From the original prototype's comments: https://gist.github.com/dstufft/2904d2e663461f010bbf
"- If there's a corner case not thought of, file is still Python and allows people to easily extend"
... and ...
"- Using Python file might cause the same problems as setup.py"
Also, how is the "lock file" actually distributed? Unless you can pip install a wheel and have it include an embedded lock file, you've still got to have some out-of-band mechanism for copying the lock file around, like you would a requirements.txt or even a fully-fledged virtual environment.
The main problem with requirements.txt, as I see it, is that you don't get exact versions unless you pin them in your requirements.txt. So you'd have to keep a loose requirements.txt and then generate a second requirements file after running `pip install -r requirements.txt` to capture the exact versions that were actually installed.
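A minimal sketch of that two-file workflow (the `requirements.lock.txt` name is just an illustration, not anything pip treats specially):

    # install from the loose, hand-maintained file
    pip install -r requirements.txt
    # snapshot the exact versions that actually got installed
    pip freeze > requirements.lock.txt
    # later / elsewhere: reproduce the environment exactly
    pip install -r requirements.lock.txt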
Further, if you happen to "accidentally" `pip install some-package` in your virtual environment, your app might now be using different packages locally without you noticing. With Pipfile the need for virtual environments is pretty much gone, assuming that at runtime it will automatically load the version of a package specified in the lockfile, which is not clear to me yet from the README.
It's identical because bundler mostly got it right, and dependency management in Ruby, while still not great/perfect, is better than just about everywhere else.
Kudos to python for moving forward.
But I'm sure I'm missing something. Please feel free to convince me I'm wrong :)
    #!/bin/bash -x
    WHEELHOUSE="/usr/local/wheelhouse"
    [ -d "$WHEELHOUSE" ] || ( sudo mkdir -p /usr/local/wheelhouse/ ; sudo chmod -R 0777 /usr/local/wheelhouse/ )
    deactivate
    set -e
    cd .requirements
    for reqfile in requirements*txt ; do
      TEMPDIR="$(mktemp -d)"
      virtualenv -ppython3 "$TEMPDIR"
      . "$TEMPDIR"/bin/activate
      pip install -U pip
      pip install -U wheel
      pip wheel --find-links="$WHEELHOUSE" --wheel-dir="$WHEELHOUSE" -r $reqfile
      pip install --find-links="$WHEELHOUSE" -r $reqfile
      pip freeze | grep -v "pkg-resources" | sort > "../$reqfile"
      rm -rf "$TEMPDIR"
    done
    wait
...however, I strongly disagree on the benefit of making the `Pipfile` executable python. Just read this gist: https://gist.github.com/kennethreitz/4745d35e57108f5b766b8f6...
- This file will be "compiled" down to json.
We know it'll be abused; we should have learned our lesson from SCons and setup.py: using Python code itself as a declarative DSL wasn't a great idea then, and it still isn't. Just use a standard hierarchical file format (JSON, TOML, XML, whatever).
Features for introspecting and editing `Pipfile.lock` should be rolled into pip and exported as a core Python module; an API for editing Pipfile.lock is a good idea, but executing a `Pipfile` is not.
Incurring a tax is the entire point of replacing the old currency. A large portion of the black untaxed money is untraceable in the form of cash, so getting them into banks and taxed is the first step.
This article is overly negative about some negative things that are part of the expected outcome, that even the people who are proponents of the change admit. Of course there are obvious drawbacks to invalidating most of a nation's currency at once, the point is how this process benefits the country.
A lot of poorer Indians barter or use smaller cash denominations anyways, and the government isn't trying to crack down on that; comparing India to Nigeria in that sense doesn't matter. And reducing the economy of the entire country temporarily is very intentional- if a big part of your country's business dealings is with illicit cash, cracking down on it will of course reduce the market.
The idea is to sacrifice some money in order to reduce corruption and increase the trust in government and business systems. If that is successful, then this move is very beneficial for India in the long run.
The immediate reaction of the common man to the demonetization of higher currency notes (followed by replacement with newer ones) was euphoric. A popular thought was that people can tolerate inconvenience for a few days because it'll invalidate the "black money" and thus rein in inflation, property prices and corruption to some extent. However, from the moment this was announced, people have been brazenly finding loopholes to convert their currency notes through unofficial means. Examples: back-dated receipts for expensive items (gold, cars, property), fake accounts, hiring poor folks for conversion via official channels, bulk purchase of coupons/cards for essential items that are permitted to accept older notes, etc.
 Untaxed money, undeclared income, ill-gotten money via bribery, fraud etc..
There's no war on cash; there's a refresh of the notes. It's an attack on black money: time to pay the dues if you've been hoarding illegal revenues. India isn't getting rid of cash, so the comparisons to Nigeria are pointless.
There are multiple opinions and counter-opinions, but no one seems to know exactly how this is going to turn out. It's wait and watch for now...
The biggest threat looks to be the methods of laundering that are cropping up -- I'm starting to hear of instances where offline-chits and money transfer agents are accepting old currency without any markdown. So it looks like laundering hasn't received as large a blow as anticipated, and hence the experiment may very well fail by the Dec 30 deadline.
One positive thing to note is that a majority of the citizens are tired of the parallel economy and are openly supporting the government's moves. Hence it should give the Modi government the cover and incentive to add-in more stringent checks and regulations to take on the laundering. This is probably the only hope for any success in this experiment.
BTW, whether this experiment succeeds or fails, it's a net positive in the long run, as it has shown: 1) that a government can take steps towards stopping the parallel economy, and 2) that it can do so with the people's support.
UBS is calling for the removal of the $100 note in Australia, and the same is happening in other countries.
Thankfully, it doesn't matter if they manage to kill all cash: cryptocurrency is, in a way, digital cash.
These institutions don't realize that their power grab is pushing people more and more into currency which they have zero control over.
Read this FT article instead:
P.S: Nigeria's per-capita income is higher than India in nominal terms (so is Africa's as a whole).
But once the core abstractions are settled, you start to reap its power. The type system catches tons of potential errors. Combinators allow for enormous expressiveness. Here you're rolling: Haskell is in its zone!
But then you hit a wall. Laziness makes for brutal debugging. Singly linked lists actually suck. Performance optimization is a black art. You find yourself longing for a language with simple semantics and mechanical sympathy. Now Haskell is bumping up against the real world.
Haskell has its sweet spot somewhere between "bang this out by 5pm" and "ship this to a million users". (No surprise it's popular in academia.)
Having said that, time-to-market is only partially influenced by my code; the biggest part is the code that I don't have to write, i.e. third-party libraries.
In my Haskell adventures I am having trouble finding third-party libraries for even the most popular things, e.g. Cassandra. As far as I can tell there are two libraries, `cassandra-cql` and `cql-io`; the first hasn't been updated for a year now, and the second has only 3 stars, which makes me uneasy.
So, although I can see where the author is coming from, I don't think you can beat Java, Ruby, JS or Python in that sense. Unless of course your code/project doesn't have a lot of dependencies.
Not so long ago there was some discussion about the "batteries" included with Haskell. It compared Haskell's situation with that of Python/Java etc.; worth reading if you are about to go the Haskell route.
It seems the priorities (academic, commercial, library support and so on) of the members of the Haskell community are at a crossroads, and they cannot seem to resolve them very well, IMHO.
My take: Haskell is good for learning some really deep concepts, but maybe not so good when it comes to commercial projects, unless you are a Haskell veteran and also have an army of Haskell veterans with you.
I would have liked to see a comparison to other functional languages (say Elixir or OCaml) and not just Java and C#. I'd also argue that picking Java instead of a more agile environment (there are some cool lightweight Java frameworks, but most people will associate it with the rather heavy enterprise stack) is a bit odd when comparing time to market. Granted, I'm mostly thinking about webapps (but the article mentions Yesod).
Still a nice article (since my post sounds overly negative upon rereading).
Maybe with F# the author would see a decrease in development time compared to Haskell?
Have we actually seen that, or have you just asserted it? Is this really true, and if it is, by how much? Haskell has been around for a couple of decades now, and has had at least two hype cycles (I remember that when I was in university in the late '90s, Haskell was the next big thing). It does not seem to expand significantly even within organizations that have tried it (and that's a very negative signal), with at least one notable case where the language has been abandoned by a company that was among the flagship adopters.
In general, we know that linguistic abstractions that seem like a good idea in theory -- or even seem to work nicely in small programs -- often don't end up having a significant effect on the bottom line when larger software is concerned. People say that scientific evidence of actual contribution is hard to collect, but we don't even have well-researched anecdotes. Not only do we not have strong evidence in favor of this hypothesis, but there aren't even promising hints. All we do have is people who really like Haskell based on its aesthetics and really wish that the nice theoretical arguments translated to significant bottom-line gains.
This blog post by Dan Ghica, a PL researcher, really addresses this point: there is nothing to suggest that aesthetically nice theory translates to actual software development gains, and wishful thinking (or personal affinity) simply cannot replace gathering of data: http://danghica.blogspot.com/2016/09/what-else-are-we-gettin...
"two's complement" is not a different system for arithmetic that includes a "sign bit", it's just a different encoding or labelling of states which happens to have a bit that reflects the sign. So, inputs to this calculator can be said to go from 0-15, but more interestingly it can also add numbers in the range -8 to +7 (and therefore, it can also subtract, though it can't negate so you'd have to manually do that to your input by performing a different encoding table lookup).
(edit: now I'm realizing you could negate by performing a two's complement multiplication by -1, performed using this calculator via a sequence of 3 (shift+adds) of your input number to itself... that's correct at least up to some fencepost)
And then by extension, you could test "what about treating the range as -10 to +5", would that encoding succeed or break down? for starters, you would no longer have a sign bit...
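If you want to play with the encoding concretely, here's a small shell sketch of the 4-bit case (the helper names are made up; `add4` just mirrors what the calculator's adder does):

    #!/bin/bash
    # 4-bit states 0..15, reinterpreted as two's complement -8..+7
    MASK=15   # 0b1111
    encode() { echo $(( $1 & MASK )); }                     # -8..7 -> 0..15
    decode() { local v=$1; (( v > 7 )) && v=$(( v - 16 )); echo $v; }
    add4()   { echo $(( ($1 + $2) & MASK )); }              # the 4-bit adder

    echo "-3 + 5 = $(decode $(add4 $(encode -3) $(encode 5)))"   # 2

    # negate 6 by "multiplying by -1" (15 mod 16) as three shift+adds:
    x=$(encode 6)
    n=$(add4 $(( (x << 1) & MASK )) $x)   # x*3
    n=$(add4 $(( (n << 1) & MASK )) $x)   # x*7
    n=$(add4 $(( (n << 1) & MASK )) $x)   # x*15 == -x (mod 16)
    echo "negate 6 -> $(decode $n)"       # -6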
 - http://www.blikstein.com/paulo/projects/project_water.html
I did that for a 4-bit K'nex adder/subtractor
I spent a year investigating blockchain applications for JPM so I've seen most of the permissioned blockchain solutions that are out there. Unfortunately, none of the solutions I saw were real blockchains. The more technical groups in banks (which GS has) surely realize this as well. The solutions I've seen consistently lack at least one of the following necessary features:
- Fast, Deterministic BFT Consensus (mining doesn't work as intended in private contexts)
- Smart Contracts (you need a deterministic language)
- Signed Transactions (TLS-based authentication trades security & auditability for speed)
- Cryptographic data structure for transaction storage
There are other features you need, but these are the big distinguishers I noticed. While technically you don't even need smart contracts or fast BFT consensus, I believe the tech isn't useful enough to justify the migration costs without them.
Disclaimer: I'm a founder in the private blockchain space and founded specifically to make an infrastructure that addresses these issues.
http://kadena.io/blog/MiningInPrivate.html
http://kadena.io
I predict we will see a huge hammer come down from SEC & IRS surrounding ICOs in 2017 as well.
I think we have to overcome https://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt to embrace blockchain.
2. Get a CDN for static assets
3. Use Webpack, again, for making development easier
Saved you a click.
By "improve" they surely mean reduce. How can a measurement be reduced by 500%? If load time has decreased to one fifth, it would have been clearer to write "load time was reduced by 80%"?
It is inherently buggy in numerous ways. It hardcodes the number of arguments a syscall has incorrectly. It screws up compat handling. It doesn't robustly match entries to returns. It has an utterly broken approach to handling x32 syscalls. It has terrifying code that does bizarre things involving path names (!). It doesn't handle containerization sensibly at all. I wouldn't be at all surprised if it contains major root holes. And last, but certainly not least, it's eminently clear that no one stress tests it.
If you really want to use it for production, invest the effort to fix it, please. (And cc me.) Otherwise do yourself a favor and stay away from it. Use the syscall tracing infrastructure instead.
Anyway, this invites the question: are you allowing your production servers to make outbound internet connections? Generally, I would proxy outbound connections and/or use internal mirrors and repos for the installation of software.
PS: Would be happy to take anyone else's recommendations for videos or books too.
This actually sounds like a very good move by Microsoft. Just issue people a phone and they will do all their work on that. There's really no need for giant workstations anymore, and I think this will be more successful than a Chromebook-type thing.
The binary file format on Windows is called PE (portable executable). I wonder if this might possibly be a fat binary format.
Apple will stay Apple. I don't think they'll go anywhere.
The question is Google. If this happened in 2008, I don't think Android would have taken off anywhere close to the way it did.
But now? On one hand, Android has millions of apps already on the market. On the other hand, Microsoft now has potentially millions of old, existing applications.
I don't think it will make a dent in the phone market. It's too commonly used as a hand-held rather than a station, and windows apps are useless there.
On the other hand, it can tank the Android tablet market
Also, maybe Microsoft will have the guts to do what Google never did: standardize ARM processors, so that all ARM devices can be updated at once. Although I assume Microsoft will also start by supporting "Qualcomm-only" at first, just like it did for phones.
Although they were formed by a different process, this does sound kind of like Earth's oceans. Do they not qualify as a geological formation?
In all seriousness, this is very interesting and sounds kinda similar to glaciers.
Kestrel will be one of the best when it comes to benchmarks.
Added video URL.
The bandwidth costs are so far out of line with what network transfer actually costs that it feels like price fixing among the major cloud players: nobody is drastically reducing bandwidth prices, only storage prices.
Charging 5 cents per gigabyte (at their maximum published discount level) is the equivalent of paying $16,000 per month for a 1 gigabit line. This does not count any operation costs either, which could add thousands more, depending on how you are using S3.
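Back-of-the-envelope check (assuming a fully saturated line over a 30-day month):

    # 1 Gbps = 10^9/8 = 125000000 bytes/s; 30 days = 2592000 s
    # => 125000000 * 2592000 bytes = 324000 GB (decimal) per month
    echo "324000 * 0.05" | bc   # 16200.00 -- dollars/month at $0.05/GB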
There are several providers that offer an unmetered 1Gbps line PLUS a dedicated server for ~$600-750/mo. Providers like OVH offer the bandwidth for as little as $100/month. ( https://www.ovh.com/us/dedicated-servers/bandwidth-upgrade.x... ) I am just not sure how Amazon can justify a 160x markup over OVH, or a 30x markup over a dedicated server + transfer.
For the time being, the best bet is to use S3 for your storage and then have a heavily caching non amazon CDN on top of it (like cloudflare) to save the ridiculous bandwidth costs.
I had wanted Amazon to wrap it in something where they managed that complexity for a long time. Looks like they finally did.
Now the only thing Amazon needs to do is expand free tiers on all of their services, or at least very low cost ones. I prototype a lot of things from home for work -- kinda 20% time style projects where I couldn't really budget resources for it. The free tier is great for that. All services ought to have it -- especially RDS. I ought to be able to have a slice of a database (even kilobytes/tens of accesses/not-guaranteed security/shared server) paying nothing or pennies.
And even worse, there is no way to prerender an SPA site for search engines without standing up an nginx proxy on EC2, which eliminates almost all of the benefits of CloudFront. This is because right now S3 can only redirect based on a key prefix or error code, not based on a user agent like Googlebot or whatever.
This means that even if you can technically drop a <meta name="fragment" content="!"> tag in your front end and then have S3 redirect on the key prefix '?_escaped_fragment_=', that will be a 301 redirect. As a result, Google will ignore any <link rel="canonical" href="..."> tag on the prerendered page and will instead index https://api.yoursite.com or wherever your prerendered content is being hosted, rather than your actual site.
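For reference, this is roughly what an S3 website redirect rule looks like via the AWS CLI (a sketch; the bucket and host names are made up) -- note that the Condition block only supports KeyPrefixEquals and HttpErrorCodeReturnedEquals, nothing user-agent based:

    aws s3api put-bucket-website --bucket example-bucket --website-configuration '{
      "IndexDocument": {"Suffix": "index.html"},
      "RoutingRules": [{
        "Condition": {"KeyPrefixEquals": "?_escaped_fragment_="},
        "Redirect": {"HostName": "api.yoursite.com", "HttpRedirectCode": "301"}
      }]
    }'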
Not only is it a bunch of extra work to stand up an nginx proxy as a workaround, but it's also a whole extra set of security concerns, scaling concerns, etc. Not a good situation.
edit: For more info on the prerendering issues, cf.:
Hardware is usually not a business's main cost, but it does matter for home users, small businesses, or startups that didn't get funded yet, some of whom might consider Tarsnap or some other online storage solution which uses Glacier at best and S3 at worst. Now you could suddenly be ~7x cheaper off doing the upkeep yourself (read: buy a Raspberry Pi), even if you throw away drives after one year.
However using Glacier as a simple store from the command-line seems horribly convoluted:
Does anyone know of any good tooling around Glacier for the command line?
By the way, I wrote an article on how to reduce S3 costs: https://www.sumologic.com/aws/s3/s3-cost-optimization/
Either way, good news on the storage price reductions :)
Shouldn't that be $5.025? Or did I misunderstand?
Am I understanding this right? $0.023/GB/month for Glacier, so * 12 months/year = $0.276/GB/year, which means:
10GB = $2.76/year, 100GB = $27.60/year, 1TB = $276.00/year ...
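(Checking those figures:)

    echo "0.023 * 12" | bc          # .276    -- $/GB-year
    echo "0.023 * 12 * 1000" | bc   # 276.000 -- $/TB-year (decimal TB)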
So considering a 1TB hard drive  costs $50.00, how is this cost effective? I can buy 5x1TB hard drives for the price of 1TB on Glacier.
I understand there is overhead to managing it yourself. So, is this just not targeted to technically proficient folks?
- The price reduction on S3 is good! Kudos AWS.
- The price change on Glacier is a fucking disaster. They replaced the _single_ expensive Glacier fee with a choice among 3 user-selectable fee models (Standard, Expedited, Bulk). It's an absolute nightmare added on top of the current nightmare (e.g. try to understand the disk specifications & pricing; it takes months of learning).
I cannot follow the changes, too complicated. I cannot train my devs to understand glacier either, too much of a mess.
AWS if you read this: Please make your offers and your pricing simpler, NEVER more complicated.
(Even a single pricing option would be significantly better than that, even if it's more expensive.)
The telescope itself is "scriptable" using a (truly ancient) version of JS via a really old implementation that I've seen referenced but can't find right now. There's a lot of open information about the JWST, but it's not widely reported on. Definitely worth checking out if you're interested in space, technology, and the systems we actually deploy into the void!
Some papers: http://www.stsci.edu/~idash/pub/dashevsky0607rcsgso.pdf
There's an interesting contrast between NASA's and Elon Musk's idea of what space exploration should be like; the former spends most of its efforts on one-off projects, whereas the latter is focused on making things cheap and repeatable and achieving reliability by iterating on a design rather than getting it perfect the first go-around.
Both approaches are needed, and certainly NASA paves the way for others to come along and do the same thing cheaper once it's been proven.
Ugh, that does not sound encouraging. Organizations love to cut testing and QA. After all, all they do is cost money, cause delays, and 'create' problems. Cutting them is an obvious way to bring a project under budget and on schedule.
Does anyone know what changes were made to testing?
We're going to be unhappy, and the advancement of astronomy delayed a long time, if something goes wrong. I don't see the next President and Congress wanting to raise revenue and spend more on science.
If you happen to be in the LA area, JPL hosts these talks once a month at their facility in Pasadena. They're free and open to the public, and they're always interesting. And if you can't attend, they're also posted to the JPL website.
It's the F-35 of astronomy.
I feel a little better now. I was wondering why we would name a space telescope after that guy.
A couple of days ago I was reading some POSIX book from 1991, and there the layout of /bin /lib /shared /usr/name/bin /usr/name/lib /usr/name/shared and so on was much more logical than what we have now, which just seems weird to me because I don't understand it.
(Unless the simple description is Not Invented Here syndrome, which I imagine it's not).
I had a quick look at the website linked at the end but it's just a site with a list of projects like a text editor and some game debugging components.
...and then abandoned it after getting me excited about a possible Emacs replacement...
I feel sad when I find that web programmers think there are only web stacks and "programming" refers only to web programming.
But SQL is very successful so maybe they'll do okay anyway.
If it can't, then "your entire programming stack" is excluding the kernel and a lot of existing libraries/code.
I'm looking forward to compatibility with semantic Web technologies.
"THE PACIFIC TSUNAMI WARNING CENTER HAS ISSUED A TSUNAMI THREAT MESSAGE FOR OTHER PARTS OF THE PACIFIC LOCATED CLOSER TO THE EARTHQUAKE. HOWEVER... BASED ON ALL AVAILABLE DATA THERE IS NO TSUNAMI THREAT TO HAWAII."
//edit Any idea when the tsunami will hit? The original news happened at 20:59 UTC (1pm PST), but I'm not sure how fast tsunamis travel.
A sister comment referenced tsunami.gov, which is for US dwellers, but this NOAA website has more information for people living outside the US: http://ptwc.weather.gov/?region=1&id=pacific.TSUPAC.2016.11....
ESTIMATED TIMES OF ARRIVAL
--------------------------
* ESTIMATED TIMES OF ARRIVAL -ETA- OF THE INITIAL TSUNAMI WAVE FOR PLACES WITHIN THREATENED REGIONS ARE GIVEN BELOW. ACTUAL ARRIVAL TIMES MAY DIFFER AND THE INITIAL WAVE MAY NOT BE THE LARGEST. A TSUNAMI IS A SERIES OF WAVES AND THE TIME BETWEEN WAVES CAN BE FIVE MINUTES TO ONE HOUR.

    LOCATION    REGION   COORDINATES    ETA(UTC)
    ---------------------------------------------
    KATSUURA    JAPAN    35.1N 140.3E   2150 11/21
    KUSHIRO     JAPAN    42.9N 144.3E   2217 11/21
    HACHINOHE   JAPAN    40.5N 141.5E   2236 11/21
    SHIMIZU     JAPAN    32.8N 133.0E   2312 11/21
    NOBEOKA     JAPAN    32.5N 131.8E   2319 11/21
"Right now the water temperature is 27 degrees and the water temperature will not rise to dangerous levels... for a while."
Stronger by far than any quake I've ever felt in 8 years in San Francisco.
They should really provide an English version of this page. Come on, Yahoo Japan.
It seems to have happened again. I wonder why you don't hear anything in scientific circles about this.
Relevant reddit thread for this incident.
Also, /u/TheEarthquakeGuy should be posting soon.
Every piece of reality has its own IE, unfortunately.
I'm asking because this is a style that is used a lot in cartoons.
Somewhat relieved to learn that SVG SMIL animations are staying in Chrome, for now: https://groups.google.com/a/chromium.org/d/msg/blink-dev/5o0...
"Each car costs approximately $1.2 million to build."
EAF uses debug registers, which limits its usefulness, and the ROP mitigations are becoming less useful because of CFG (Control Flow Guard), although the latter does require applications to be recompiled with the latest Visual Studio (and to opt in to CFG, which is not enabled by default). It's not really surprising to see Microsoft retire EMET, considering you can get nearly the same kind of coverage on a vanilla Windows 10 install.
I made a rough guide as to the layout of the MitigationOptions QWORD which controls these mitigations:
There are Microsoft-provided functions which can also enable these mitigations when compiled into the code. Also, let's not forget that for now EMET still works fine on Windows 10.
Not sure when it was originally posted; as noted, there is an update today, Nov 21st 2016, which is the same date as the article.
Basically, Windows 10 doesn't use EMET, and MS claims that's because Windows 10 has other mitigation techniques making it more secure. However, as per the article, there are many mitigation steps not included, and many require applications to be compiled specifically for the EMET replacement mechanisms.
The update to the article today notes that the latest Windows 10 release supports more than before, but it still doesn't support everything EMET provides.
Now I'm not sure - Windows 10 doesn't have the full featureset, and I don't _think_ Microsoft is likely to actually introduce the entire featureset into Windows 10 with much lead time before the EOL.
If they do, though, it would certainly be a nice carrot AND stick to get people up to at least a certain update version for that functionality.
The difference is that GNU is a lot smaller and has a lot less power and resources than Apple and Intel. So much that it is relatively easy and, in fact, explicitly allowed by the GPL, for someone like Yamamoto to come along and decide that, by golly, Emacs will have unique macOS-only features and Apple deserves to have more money to do whatever it wants to its users because its users enjoy Apple's treatment, smelly GNU/Beards be damned. This is a lot easier than fixing icc for AMD CPUs or putting headphone jacks into iPhones 7.
> We are not welcome, and never will be.
You are welcome. You are very welcome. Apple is not. You should not identify yourself with Apple's operating system.
Homebrew tap here: https://github.com/railwaycat/homebrew-emacsmacport
The point of free as in speech and not free as in beer is that the choice you make does not need to be right or wrong by the standards of other humans. Holding back technology because it's not available on "your" platform is just as monopolistic as any corporate entity they have butted heads with.
    -      /* Don't use a color bitmap font unless its family is
    -         explicitly specified.  */
    -      if ((sym_traits & kCTFontTraitColorGlyphs) && NILP (family))
    +      /* Don't use a color bitmap font until it is supported on
    +         free platforms.  */
    +      if (sym_traits & kCTFontTraitColorGlyphs)
GNU Emacs is part of GNU/Linux. Why are you surprised that [other OS] is a second-class citizen for a project which already has a clear OS target?
Besides, the guidelines for GNU packages clearly state that GNU packages cannot emphasise features of proprietary OSes. So it's not really the maintainer's fault; it's one of the rules for GNU packages (you can blame the FSF, but you've got to look at it from their perspective).
GNU packages are different from other projects, because they're actually part of an OS. So they have an obligation to support the OS they're a part of more than other operating systems (especially proprietary operating systems).
The post also misses the second half of the paragraph, which suggests a method to get emoji working again:
If some symbols, such as emoji, do not display, we suggest to install an appropriate font, such as Symbola; then they will be displayed, albeit without the color effects.
- well, right now, it's just bloating Emacs for the rest of the world. If one needs it on MacOS, I'm sure it can be added it to a personal installation.
I must say, Emacs still runs better on macOS than Xcode does on Linux.
Users of non-GNU systems that use GNU software: the FSF is actively trying to make your life more difficult.
The FSF don't want to make their software better on non-free operating systems which, given their goals, doesn't seem particularly unreasonable to me.
This works because it assumes you have named your local identifier either `markdown` or `md` and are using that. If you want to use something else you need to specify a custom pragma: https://github.com/threepointone/markdown-in-js#pragma. And that feature didn't exist until 6 hours ago.
Why is this dangerous? Let's imagine you use `markdown` and you chug along for another 5 months on this project, and then you need to use `mdown` in some file, or maybe a coworker uses `mdown`, because why not? So it doesn't work. And you're confused. And you spend hours trying to figure out why your app is broken.
I don't know whether it's a good idea, but I like it so far.
Why is this not a component?
Original - http://coffeespace.org.uk/loader-orig.js
Minified - http://coffeespace.org.uk/loader.js
Why? I don't know; it seemed like a great idea at the time to get my clients doing all the heavy lifting. Page download times are typically lower than for all our other pages, given how rich the content is.
Now it seems they're leaving people who depended on that behind. No company offers an ecosystem that doesn't require "fiddling" to get things to work correctly. Maybe this is the way it has to be, but I really wonder what Apple's strategy is going forward, because it's clear that they've slowed down or stopped development on everything other than their phones/pads and the occasional laptop. What are all their engineers doing? What is the use of having hundreds of billions in the bank if you're not investing it in growing or creating product lines?
On a relative basis, Apple has infinite resources. It has cash, the brand and can attract the right people to run the business. Each product line, like the routers and mac pro can be focused on because they have the resources to do it. Most companies re-focus on core products because they are spread too thin - Apple is not.
iPhone 7 looks like iPhone 6 sans the headphone jack. So no design changes for 3 years. To me, 7 does not feel like a significant update over 6.
Macbook Pro got a touch bar. Otherwise, minor design changes since last revision. Does not feel like a significant update.
iMac not updated for 12 months. No significant design changes for years, but the screen resolution is now Retina.
Macbook Air not updated since March 2015 (still low resolution). No significant design changes since introduction.
Mac Mini not updated since 2014. No design changes since 2011.
Mac Pro not updated since 2013.
iOS 10 and macOS are minor revisions.
Thunderbolt Display and now AirPort Extreme/Express are dead.
The iPad Pro 9.7" looks like iPad air (1 or 2). iPad pro 12.9" looks like any existing iPad but bigger. The iPad minis all look alike.
They didn't release new iPads this fall. Isn't that a first?
It feels like the hardware line-ups are getting more confusing: two different iPad sizes called Pro, as well as the "Air 2" and the minis. It made sense that the Pro was the largest one, but they confused us by releasing a smaller Pro that looks like an Air 2, but has a better display than the large Pro. How many iPads do we need?
There is the main iPhone line (... 6 6S and 7) that comes in two sizes, and then the evil cousin called iPhone SE which looks like a 5.
The laptop line is getting more messy too. The Macbook is like a slower Macbook Air but with higher resolution and 12". They killed the 11" Air, but we now have 3 laptops at 13" (Air + two types of Pro). Is the 13" Pro without touch bar option really necessary?
All these series ("", Air, Pro, SE, Mini) which pop in and out of existence feels like they are trying different names for marketing reasons (especially for the iPads).
I appreciate the yearly impressive but predictable CPU/GPU and software improvements, but it is really starting to feel like they are either struggling a bit, or working on something that takes a lot of resources from non-essentials and focus.
The best run technology companies are run by dictators with a strong technology and product background (Jobs, Elon Musk, Reed Hastings). Let me say it again... Yes dictator. You are never going to get a large group of people to agree on something, if you do it's watered down, compromised, and lacks ambition.
Given that's no longer the company we're being left with, is there a window of opportunity for a new entrant to step in and start filling that void using the same principles of design and cohesion that Apple have made famous? A sort of "Apple for Nerds".
 BTW, this is not Google, nor should it be any company where the "customer" is actually the product.
This is what happened to Microsoft, and it took many years and a major internal upset to get them back on a positive track. And now look at Microsoft since they've started diversifying and innovating again: they are providing an OS (and hardware) which is genuinely interesting to professionals in a variety of fields. They are going to steal Apple's thunder here soon, unless Apple really makes an effort.
Now what we are witnessing is the product people are not in charge. The business folks are. So you start to see them tell the product folks they can't do something because it is not cost effective and that is profound.
I recently decided that Apple's ecosystem no longer works for me, and that I'm gradually going to start putting most of my tech eggs into other baskets.
And whaddaya know? Between the monitors, the wireless routers, and the high-end professional workstations, Apple seems to agree with me that their ecosystem no longer works.
I guess it's nice to have some confirmation.
Wifi though has always been a very big PITA for consumers, and Apple's hardware/software integration has always been a better bet for ease of configuration, and honestly, reliability.
The optimist in me hopes that maybe they'll have something better for us, or are making an acquisition to replace their current product lineup completely.
The pessimist in me thinks that maybe they're leaving this market to avoid needing to develop hardware that meets its publicly stated standards for protecting consumer privacy. Potentially they have been approached/mandated to enable some kind of backdoor in it, and they chose to stop producing it, rather than comply. /tinfoil_hat
I felt this coming as long as Apple never added iOS backup support to the Time Capsule. The APFS migration seemed like it would be the final bullet.
Had to ditch my latest-model Time Capsule when I got fiber since Apple doesn't let you change the MTU (required for my PPPoE over fiber), and getting faster Time Machine backups than Apple's anemic CPU could muster was just a plus. I was hoping to reuse it as an 802.11ac bridge to my TV/Media center, but, nope, Apple removed wireless bridging as a feature a couple years ago.
edit: just got reminded they spent a bunch of money developing special paper and ink for a $300 book instead of this. OK, yeah, no there's logic here.
Apple employs people who are very passionate about wireless technology. VERY passionate. They just don't get the resources or freedom to take their products to the next level. They have to wait for someone from above to be sold on the idea. Yet it never happens, because the engineers are told to keep fixing bugs and keep things humming along. File a radar, and keep doing your job.
Probably what happened, a few senior engineers got yanked onto the Watch project or this fabled car thing. They worked and worked and worked on it, and left the junior guys fixing macOS/iOS airport bugs. Then when it came time to build a new revision, they noticed suddenly they were WAY behind the market. Meanwhile, engineers continued fixing macOS/iOS bugs, this time for 3rd party systems, and bam, someone in upper management probably asked, "umm, why are we still building our own thing?" They probably merged the OS wireless teams with the driver team and called it a day.
So here we are today, Bloomberg most likely got a tipoff from a disgruntled employee who didn't like they were killing the project.
But this move seems to indicate a fundamental misunderstanding of the Apple ecosystem. Starting with the WiFi in a consumer's house, today, the Apple 'System' provides you with the entire experience from simply using your iPhone/iPad to easily playing your music on your speakers through AirPlay to watching content on Apple TV to backing up your computers via TimeMachine and providing remote access through Back-To-my-Mac.
Further, the devices provide guest and "Mesh" like functionality to have an extended WiFi network -- before mesh wifi technology existed.
All this and easily configured though, most appropriately, an app.
Shutting down this most fundamental base functionality of AirPort to me is a signal that Apple really doesn't want to be in the business beyond iPhones and iPads.
This coming at a time when the competition is actually ramping up providing WiFi devices (Google WiFi) and providing something that Apple has had for years.
It's mind boggling.
Sad day for me.
I know a lot of people who have worked, or currently work, at Apple, and a lot of other Apple fans who buy almost all of their technology products from Apple. I don't think any of them ever bought an Airport.
Airports were a lot more expensive than most consumer routers, and ease of setup is not a huge differentiator. With most routers, you go through the pain of setting them up the day you buy them, and that's it. Even non-technical users don't seem to have too much trouble setting up their generic consumer routers.
I would much rather see Apple focus its engineering resources on a good iPhone 7 than waste them on a wifi router that is not nearly as important to the company's ecosystem and revenue.
All of the current changes are taking away.
Aesthetics were and are a huge part of the appeal of an Apple-filled desk. Not one of the other makers gets close, yet Apple hasn't even asked LG to make their 5K screen look nice, or even complementary.
After the underwhelming MBP with rubbish travel-free keyboard, and not having a monitor to sit alongside my iMac, they seem incoherent. Where's the new Mini, Pro, 34" curved Cinema screen or iMac?
We just need someone else to discover aesthetics and they have a real problem to contend with. I care what the overall look is of things in my home.
Why allocate the manpower when the others on the market have caught up with ease of use and this generates less than 1% of your revenue?
My solution has been to use a TP-Link router running OpenWRT as the router, and a UniFi as an access point. I run the Ubiquiti controller on a separate server. This gives me very good performance and coverage, but the ease of use is zilch. It requires me to know way more than an average person should need to know just to get the network set up.
I have never used the Airport routers from Apple, but I understand that they aimed at fixing a lot of these issues. Ultimately I chose not to go with them because they were still not a turnkey solution, while not giving me nearly the performance (speed and coverage) I wanted.
I wonder who will be the go to recommendation from now on.
Here's to hoping AC is good enough for the foreseeable future. I'm presuming it will be for the average joe.
Can anybody name a solid alternative for $299?
With Apple's resources the router division cannot have been a distraction, nor would dropping it materially, or even noticeably, affect Apple's numbers. Maybe this reflects a retrenching mindset taking hold within Apple?
Google is definitely winning some points in the "simple and works" category. Pricey but worth it, IMHO.
I would recommend pairing a couple of UniFi AP AC Pro access points (with a maximum throughput of around 1.3Gbps on the 5GHz band) with a UniFi Security Gateway; for most scenarios you will be done and probably never look back. If you really are into networking, you could also take a look at their EdgeMax series, which has some router features the simpler UniFi products do not provide (like a shit ton of routing features, and the ability to hook it up directly to a fiber line using the provided SFP port).
Take a look at https://www.ubnt.com/ if you are interested. Hope this helps :)
I was skeptical of the predictions of Apple's implosion once Steve was gone. How important can one guy's contribution be?
Time to look at something like Ubiquiti I guess.
BUT, I suspect this will prove very short-sighted in the long run, and possibly endanger the company as a whole. The one thing Apple is (it seems) overlooking is the ecosystem effect. While some may not buy Apple monitors or routers, many do. Sure they last a long time and you don't need to constantly buy another one, and there may be viable substitutes, but if you're one of those people who considers themselves Apple-only (I have a non-techy friend like this) then eliminating Apple products just reduces their attachment to all things Apple. Pretty soon they start realizing that, hey, this non-Apple router works just fine! Hey, this non-Apple monitor works just fine, too! And the importance of the brand is reduced. Then they inevitably start thinking about how maybe a non-Apple laptop might be just as good (too) and cost a boatload less. Or maybe the new Pixel phone is as good as their aging iPhone. After all, they've heard good things about the camera.
So, yes, in strict accounting terms it may make sense to eliminate product lines with thin margins and low volume, but on the whole it props up the whole brand. Personally, I like knowing that I could go buy an Apple monitor. I know they're exceptional quality, and beautiful to boot. I may not have the cash to do it now, or even soon, but it's something to look forward to. Same with their excellent routers. But now? I guess I'll have to keep an eye on what Microsoft is up to.
It's ironic to me that Apple has lost sight of what supports and nurtures their brand, while it seems Microsoft has discovered Apple's secret sauce. Microsoft is creating beautiful expensive products which are almost certainly thin margins and low volume, but is doing so because of the cachet that comes from doing so. From the perception that the Microsoft brand is associated with beautiful, high quality, electronics. This rubs off on everything else they do. And man, do those new Surface Studios look nice.
I've been gradually moving the whole family to Mac-land, but now I may have to rethink. It seems Apple has lost touch and is now run by accountants and analysts--not designers and engineers.
1. The router they get from their ISP is good enough, they can get support for their internet access and their wifi from one place and they don't mind paying the $7-$10/month. This seems like it would be the easiest for the non-tech user.
2. A combined router/cable modem is desired but you don't want to pay $7-$10/month. Buy a device that is compatible with your cable modem -- again something that I don't see Apple making.
3. You want a separate ISP modem and more advanced wifi router -- the only case that an updated Airport would satisfy.
4. You want an easy to configure mesh network for better coverage -- buy some Eero devices.
5. You want a versatile travel router -- one that can serve as a regular router (ethernet -> wifi bridge), a wifi->ethernet bridge, a wifi->wifi bridge (to create a private network from a public network), or an extender. There is already a $30 router that can accomplish that. I have one of these:
And yes, it actually is the AirPort, as said devices actually work fine on other networks.
I honestly think they have become myopic. They aren't seeing the big picture of what some of the less profitable products and features are accomplishing. They just aren't making enough income.
I'm sure Apple of all companies knows this, but revenue of a particular product doesn't necessarily show the whole picture when some of your brand's appeal is that customers can buy all Apple products and be fairly confident they work well together.
Separately, I'm very happy with Ubiquiti products, but I'm also a power user.
I know there is enough hate towards Apple here and I will probably still buy their new 15" Macbook, but I'm hoping for either Apple to get back to what they did before, or some other company stepping in and taking over the things Apple stopped doing.
Looking at Google for example: Started building high-end phones (iPhone), released a pretty good router (Airport), actively pushing chromecast (AppleTV). What if Google were to make a powerhouse of a laptop that's not running ChromeOS? Could that be viable?
The next Macbook I buy will hopefully last another 5-6 years, or hopefully longer. So for me 5-6 years for Apple to build something truly impressive. Who knows? Maybe Apple really does have a broader vision that we just don't see yet.
- Steer Apple towards its more established areas of expertise, where margins and competitive advantage is high, and the rest of the competition is dismal;
- Clip non-innovative departments where purpose and identity are lost;
- Concentrate resources where Apple's leadership is comfortable in;
- Sunset all else.
For the rest, I'll repeat what everyone says. The Airport is an essential part of the Apple ecosystem. It just works. I'd be surprised if they actually drop the product.
I have heard a few people say that their Apple router has been very reliable, so maybe it has to do with it running NetBSD, or at least I would like to think that is the cause.
It's too old and can't be managed with the new version of the Airport Utility. The old version (which was very difficult to find) wouldn't run on a modern MacOS.
Fortunately I have a 10 year old Mac Pro and was able to download the old version of the software and make it work, but it's just not worth the effort every time I have to reconfigure it. IMO the Airport Utility software was already pretty wonky, it was a bit confusing to try to connect to the unit. You'd have to do a few resets of the device before it would show up.
Once it works, it works GREAT though.
Oh well, I've got all Meraki gear in my house and it works flawlessly. 4k streaming over wifi, no problem. That said, I was lucky enough to buy a house wired with cat5, so all of my bandwidth-hungry devices are wired anyway.
Ubiquiti is probably the way to go for most people though. Get rid of the consumer junk.
Why bother developing in that space when you can just buy whichever startup matures to the largest market share? Throw those engineers somewhere else.
I have never been happier with my internet setup. The only time I have had issues was when FIOS was out. The only restarts I have had to do was when my power went out. It has been close to two years now and this is the least amount of maintenance I have ever had to do.
The Timecapsule was pretty much plug and forget, easy to use. Now when it comes time to retire the Timecapsule I'll have to look outwith the Apple ecosystem and find something else, tinker with its configuration (more than likely for hours) and hope it continues to support my devices with firmware updates.
I'm sure my QNAP NAS has Timecapsule functionality but I'm new to the NAS world and I don't think my onsite backup is something that I feel comfortable trusting with it yet.
Although my old Apple router is still chugging along perfectly well, I'm bummed about this change. First router I have ever felt 100% comfortable with the configuration and operation.
Wireless networking falls into this category. Yes, a wired home network would be faster, but for the vast majority of tasks wireless is fine.
Why would Apple need to upgrade the AirPort Extreme anyways?
Just try to imagine Steve Jobs setting up a generic router.
* * *
Now imagine him searching for a wireless router and seeing some Google product in the top rankings.
(EDIT: I tried to play with the "Steve wouldn't allow this" trope and failed removing it to reduce the noise)
There are still wifi routers that you can use. Which one you choose makes almost no difference whatsoever. Just like with Apple, you can go out and buy a random wifi router, take it out of the box and plug it in, and it will just work. Stop freaking out.
Love, the non-Apple universe.
I would have gone with a server approach to a wi-fi router, one that does everything in MacOS Server - email, VPN, web, etc..
Basically the hedge fund fee structure extended. So someone who wanted more risk would give a higher cut of fees for higher gains and a lower cut of fees for lower gains, and vice versa. The average expected value to the manager should be a constant percentage of assets, but the distribution changes for each customer.
One cool thing that naturally falls out of this idea is negative fees: if someone is risk averse enough, then the incentives require the manager to lose money if the customer loses, which causes the manager to be risk averse as well for those funds.
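A toy illustration of the constant-expected-value idea (all numbers made up; suppose the fund is equally likely to end the year up 10% or down 10%, and the manager targets an expected take of 1% of assets):

    # client A (risk-tolerant): 2% fee after a gain, 0% after a loss
    echo "scale=4; (0.02 + 0.00) / 2" | bc   # .0100 -- expected 1% of assets
    # client B (risk-averse): 3% fee after a gain, -1% after a loss (manager pays)
    echo "scale=4; (0.03 - 0.01) / 2" | bc   # .0100 -- same expected 1%

Client B's negative fee on the downside is exactly the mechanism that forces the manager to be risk-averse with those funds.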
(I have more detailed thoughts on this that this margin is too small to contain; feel free to email me for some disorganized elaboration on the above.)
The products and potential returns that honest, legit RIAs discuss with potential clients will always be unappealing compared to what dishonest competitor RIAs (who are willing to exaggerate) will be offering.
One lesson from the election: it's hard to convince people of a reality they don't want to hear, and to warn them that others who are promising wildly optimistic scenarios are not being totally honest with them. Potential investors want to believe exaggerated talk of huge returns by dishonest RIAs, and honest RIAs lose clients because of this.