Systems Engineering teaches the concept of POSTED: People, Organisation, Support, Training, Equipment and Doctrine. When a system is developed, consideration must be given to all of these aspects. Failing to do so means you design an incomplete system.
In this case, Uber has developed a piece of Equipment: their God Mode view. Franken's asking about the other pieces of the system, such as the training, support, doctrine and people. These are equally important to design, document and implement. Failing to give due consideration to these aspects of the system is no different from delivering an incomplete equipment solution. I'm interested to see whether Uber gave due consideration to these aspects of the system.
There's something to be said about startups moving fast to develop technology but not necessarily the other aspects of a complete system. Mature systems engineering / software development firms do this day in and day out. Yes, it can lead to slower iteration on the core technology and capabilities, but it is critically important to consider. I suspect it's often a pinch point when start-ups try to scale, for example when a piece of technology then needs to consider user access rights, etc.
Not to mention all of the lobbyist pressure Uber is experiencing on the business side of things: this is the kind of ammunition that taxi driver unions and other companies/entities threatened by Uber's business model can only dream of getting. Uber is not doing itself any favours here.
It seems that Uber have well and truly put their foot in it this time, more so than with any of the other controversies and scandals that have involved the company. And yet, after all of this, Emil Michael gets to keep his job? Seems to me the only way Uber can start to make amends and repair their broken image here is to make some effort and fire Emil.
Jurisdiction: (1) Oversight of laws and policies governing the collection, protection, use and dissemination of commercial information by the private sector, including online behavioral advertising, privacy within social networking websites and other online privacy issues; (2) Enforcement and implementation of commercial information privacy laws and policies; (3) Use of technology by the private sector to protect privacy, enhance transparency and encourage innovation; (4) Privacy standards for the collection, retention, use and dissemination of personally identifiable commercial information; and (5) Privacy implications of new or emerging technologies.
The barrier to entry for an Uber competitor is quite low and trust is the only thing that keeps Uber afloat. If they don't realize that and act accordingly, they could die relatively quickly.
But how could Uber benefit from answering this fishing expedition? If I were Uber I'd simply stonewall. They are under no legal compulsion to answer, and virtually no answer would help them.
I just think it's incredibly naive to think that they wouldn't use it in any way possible.
In all reality, maybe he said these things just to get the free advertising? It's too bad it's come to this. That said, is there any freeware Uber/Lyft-type code floating around? Just curious.
Though I have no real idea what I'm talking about...
This feels intuitive to my mental picture of the universe.
The description of this large scale structure and the expansion of the universe has always put me in mind of watching the patterns form and reform from drips in a soapy sink or an elastic fabric being pulled apart.
In both cases, you end up with these big expanses bordered by dense stringy areas. That the motion of the stuff that snaps / shears / collapses or whatever into these strings and knots would be aligned seems perfectly logical.
This seems like it might be a breakthrough result.
Do quasars that aren't parallel to their large-scale structures not have a significantly polarized signal? Maybe interference from the structure or a weaker signal because of their alignment?
"We made incorrect assumptions about the Express.js API without digging further into its code base. As a result, our misuse of the Express.js API was the ultimate root cause of our performance issue."
This situation is my biggest challenge with software these days. The advice to "just use FooMumbleAPI!" is rampant, yet the quality of the implemented APIs and the amount of review they have had varies all over the map. Consequently, any decision to use such an API seems to require that one first read and review the entire implementation of the API; otherwise you get the experience that Netflix had. Good APIs make this worse: you spend all that time reviewing them only to note they are well written, but each new version, which could have not-so-clued-in people committing changes, might need another review, so you can't just leave it there. And when you find the 'bad' ones, you can send a note to the project, which can respond with anything from "great, thanks for the review!" to "if you don't like it, why not send us a pull request with what you think is a better version."
What this means in practice is that companies that use open source extensively in their operation become slower and slower to innovate, as they are carrying the weight of a thousand different checks on code quality and robustness, while people using closed source will start delivering faster and faster, because they effectively delegate the review/quality question to the person selling them the software and focus on their product innovation.
There was an interesting, if unwitting, simulation of this going on inside Google when I left, where people could check in changes to the code base that would have huge impacts across the company, causing other projects to slow to a halt (in terms of their own goals) while they ported to the new way of doing things. In this future world, changes like the recently hotly debated systemd change will incur costs while the users of the systems stop to re-implement in the new context, and there isn't anything to prevent them from paying this cost again and again. A particularly Machiavellian proprietary source vendor might fund programmers to create disruptive changes expressly to inflict such costs on their non-customers.
I know, too tin hat, but it is what I see coming.
It's actually quite clear: most routes are defined by a regex rather than a string, so there is no built-in structure (if there's a way at all) to do O(1) lookups in the routing table. A router that only allowed string route definitions would be faster but far less useful.
I can't explain away the recursion, though. That seems wholly unnecessary.
Edit: Actually, I figured that out, too. You can put middleware in a router so it only runs on certain URL patterns. The only difference between a normal route handler and a middleware function is that a middleware function uses the third argument (an optional callback) and calls it when done to allow the route matcher to continue through the routes array. This can be asynchronous (thus the callback), so the router has to recurse through the routes array instead of looping.
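A minimal sketch (not Express's actual source) of the dispatch style described above: each layer advances the match by calling `next()`, which may happen asynchronously, so the router re-enters `next()` instead of running a plain `for` loop.

```javascript
// Sketch of a next()-driven router. Because a handler may call next()
// later (even after async work), the only way to advance through the
// routes array is re-entering next(), hence the apparent recursion.
function makeRouter() {
  const layers = []; // { regex, handler }, matched in insertion order
  return {
    use(pattern, handler) {
      layers.push({ regex: new RegExp('^' + pattern), handler });
    },
    handle(url, done) {
      let i = 0;
      (function next() {                 // re-entering next() is the "recursion"
        const layer = layers[i++];
        if (!layer) return done();       // ran out of layers
        if (!layer.regex.test(url)) return next(); // skip non-matching layer
        layer.handler(url, next);        // handler calls next() when finished
      })();
    },
  };
}
```

The middleware/route-handler distinction from the comment falls out naturally: a middleware is just a layer that calls `next()` to continue the scan, while a terminal handler simply never does.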
This has me scratching my head. The diagrams are pretty, maybe, but I can't read the process calls from them (the words are truncated because the graphs are too narrow). And I can't see, visually, which calls are repeated. They're stacked, not grouped, and the color palette is quite narrow (ColorBrewer might help here?).
At least, I _can_ imagine how you could characterize this problem without novel eye-candy. Use histograms. Count repeated calls to each method and sort descending. Sampling is only necessary if you've got -- really, truly, got -- big data (which Netflix probably does), but I don't think the author means 'sample' in a statistical sense. It sounds more like 'instrumentation', decorating the function calls to produce additional debugging information. Either way, once you have that, there are various common ways to isolate performance bottlenecks. Few of which probably require visual graphs.
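The histogram approach described above fits in a few lines; this sketch assumes a hypothetical `samples` array of sampled method names rather than any particular profiler's output.

```javascript
// Tally calls per method and sort descending: the "count repeated
// calls" alternative to a flame graph.
function callHistogram(samples) {
  const counts = new Map();
  for (const method of samples) {
    counts.set(method, (counts.get(method) || 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```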
There are also various lesser inefficiencies in the flame graphs: is it useful (non-obvious) that every call is a child of `node`, `node::Start`, `uv_run`, etc.? Vertical real estate might be put to better use with a log scale? Etc.
Nice, contained way to show data like this.
Possibly worth mentioning, but there's really nothing stopping people from adding dtrace support to Express, it could easily be done with middleware. Switching frameworks seems a little heavy-handed for something that could have been a 20 minute npm module.
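For illustration, per-request instrumentation really is about that small as middleware. This sketch uses plain timing with a caller-supplied `fire` callback standing in for actual dtrace probes (real probes would need a native provider module); all names here are illustrative.

```javascript
// Hypothetical instrumentation middleware: fire a probe when a request
// starts and another (with elapsed ms) when the response finishes.
function instrument(fire) {
  return function (req, res, next) {
    const start = process.hrtime.bigint();
    fire('request-start', req.url);
    res.on('finish', () => {
      const ns = process.hrtime.bigint() - start;
      fire('request-done', req.url, Number(ns) / 1e6); // elapsed ms
    });
    next();
  };
}
```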
> ...also saw that the process's heap size stayed fairly constant at around 1.2 GB.
This is because 1.2 GB is the max allowed heap size in v8. Increasing beyond this value has no effect.
> ...Its unclear why Express.js chose not to use a constant time data structure like a map to store its handlers.
It is non-trivial (not possible?) to do this in O(1) for routes that use matching/wildcards, etc. This optimization would only be possible for simple routes.
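A hedged sketch of the hybrid this implies (illustrative names, not Express's API): exact-string routes go in a `Map` for O(1) lookup, while parameterized or wildcard routes still fall back to a linear regex scan.

```javascript
// Only literal paths can live in an O(1) map; anything with
// parameters (:id) or wildcards (*) still needs an ordered scan.
class HybridRouter {
  constructor() {
    this.exact = new Map();   // literal path -> handler, O(1)
    this.dynamic = [];        // { regex, handler }, scanned in order
  }
  add(path, handler) {
    if (path.includes(':') || path.includes('*')) {
      const regex = new RegExp('^' + path
        .replace(/:[^/]+/g, '[^/]+')  // :id matches one segment
        .replace(/\*/g, '.*') + '$'); // * matches anything
      this.dynamic.push({ regex, handler });
    } else {
      this.exact.set(path, handler);
    }
  }
  find(url) {
    if (this.exact.has(url)) return this.exact.get(url);
    const hit = this.dynamic.find(d => d.regex.test(url));
    return hit ? hit.handler : null;
  }
}
```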
> What did we learn from this harrowing experience? First, we need to fully understand our dependencies before putting them into production.
Is that the lesson to learn? That scares me, because a) it's impossible, and b) it lengthens the feedback loop, decreasing systemic ability to learn.
The lesson I'd learn from that would be something like "Roll new code out gradually and heavily monitor changes in the performance envelope."
Basically, I think the approach of trying to reduce mean time between failure is self-limiting, because failure is how you learn. I think the right way forward for software is to focus on reducing incident impact and mean time to recovery.
1ms / entry? What is it doing that it's spending 3 million cycles on a single path check?
> our misuse of the Express.js API was the ultimate root cause of our performance issue
Source: myself - https://github.com/strongloop/express/pull/2237#issuecomment...
"This turned out be caused by a periodic (10/hour) function in our code. The main purpose of this was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array"
refresh our route handlers from an external source
This is not something that should be done in a live process. If you are updating the state of the node, you should be creating a new node and killing the old one.
Aside from hitting a somewhat obvious problem by messing with the state of Express in a running process, once you have introduced the idea of programmatically putting state into your running node you have seriously impeded the ability to create a stateless, fault-tolerant distributed system.
I built one for Java: https://github.com/augustl/path-travel-agent
"First, we need to fully understand our dependencies before putting them into production."
* Netflix had a bug in their code.
* But Express.js should throw an error when multiple route handlers are given identical paths.
* Also, Express.js should use a different data structure to store route handlers. EDIT: HN commenters disagree.
* node.js CPU Flame Graphs (http://www.brendangregg.com/blog/2014-09-17/node-flame-graph...) are awesome!
I don't understand the need for refreshing route handlers. Could someone explain why they needed to do this, and also why from an external source?
Now that I look at it, there's a TOCTOU bug on the fstat/open callback, too: https://github.com/tj/send/blob/master/index.js#L570-L605
This should be doing open-then-fstat, not stat-then-open.
We really need a 60 fps equivalent for web stuff. You have 16ms, that's it.
In some ways, these engineers are not that different from academic researchers, in that they are devising experiments, verifying techniques, all in the pursuit of the question: why?
> Something was adding the same Express.js provided static route handler 10 times an hour.
Why didn't it increase the heap size? Maybe it was too small to be noticeable?
I couldn't agree with this more. Understanding where time is being spent and where pools etc. are being consumed is critical in these sorts of exercises.
I guess the Netflix situation is one of those that doesn't occur in most common usage; certainly dynamically updating the routes in live processes versus just redeploying the process containers hadn't occurred to me as a way to go.
Give me flask + uwsgi + nginx anyday.
The word you want is "lessons".
What would be the downside? Larger UI elements and less available space on the screen? Could a @2x instead of @3x mode work or would that result in super tiny "bad hidpi" UI?
Hasn't iOS had tools for building resolution-independent apps ever since the iPhone 5 was released?
Except this is not true, and I don't know where people are getting this idea. Maybe it's just SV or YC that has money, but angels and VCs in D.C. and NY, at least, are looking for solid traction before even bringing up the words "term sheet".
At a recent Cooley pitch event a friend of mine who is already revenue positive came up from Ohio to pitch his startup. He told me not a single investor had followed up with him about a term sheet despite multiple discussions after the pitch. Another friend in CO is in the same boat, being revenue positive but with no interest from angels or anyone else.
I think stating that anyone can get money is a dangerous thing to say, because it gives the wrong impression about the availability of dollars. That post about things being frothy is so insanely different from the reality here on the east coast that it's staggering and sounds totally unbelievable (not saying that I don't believe it, by the way).
I realize you have to provide advice that might apply to all startups, but in this specific case: cut the burn to $60k (maybe losing a bit of equity to keep employees happy) and try really hard to hit $60k in revenue, and bam, you're suddenly at breakeven, your runway can start to grow, you can breathe, etc.
Raising money is always good, but it's hard, and if you've received a lot of nos, it's not going to get easier. Planning for the acquihire is pretty much admitting defeat.
Some data on how long companies typically wait between Seed and Series A rounds. The median is 349 days so raising for 18 months is smart.
A couple of other notes:
- Might want to look at revenue-based financing. More of a debt instrument but if you can't raise or are getting bent over by equity investors, it is an option.
- I wish the "funding is required to grow quickly" meme would die. Our company is revenue funded and growing at a very good clip. If you can make your customers your de facto "investors", life can be very good.
If they are charging monthly, what about blasting all paying customers with an upgrade-to-yearly promotion (20% off)? That would bring in a lump sum of cash upfront, which should provide additional runway.
Either way, even more reason for founders to bootstrap for longer if they haven't hit that huge growth curve yet, or just bootstrap forever!
<a href="42floors.com">office space</a>
You need to make that http://42floors.com or it's not going to work for users (and search engines ;) )
I've seen a startup that just broke even year after year for the past 6 years. They kept increasing revenue for the sake of a higher valuation, but what ended up happening was that it created a toxic working environment and the highest attrition rate (because they simply fire people and replace them), and eventually a year came around when they started bleeding. Eventually the founding members were fired. Now the company is getting outdone by the competition, A-list clients have jumped ship, and business is dwindling.
Just for the record, my ads were not clickbaity and were targeted fine, and my magazine didn't suck (IMO at least heh), so if someone clicked on my ad I would pretty much expect it to stay in the page and have at least a look at the cover of the current issue.
Why did I burn $1200 if the ads weren't working from the start? I wish I had figured that out earlier and spent that money on a fancy chair or whatever instead... The thing is that I was only looking at my daily visitors and believed that everything was fine; it wasn't until the end of the month (when I always did some kind of audit to see if I was growing or not) that I noticed the number of new subscribers had remained the same even though FB ads were active the whole time.
Since then, the only advice I give to friends and clients is "stay away from Facebook ads, they're not what you think they are". And on a small side note, I tried a lot of advertisers at that time, and the best experience I had was with StumbleUpon: their referrals converted to subscribers at an incredible ratio (like >50%!!!), and on top of that they drove some extra organic growth even a few months after the campaign was finished. Respect for them.
"These ad units are largely purchased by free-to-play game publishers such as King (maker of Candy Crush Saga) and Big Fish Games, which leverage Facebook's incredible demographic data to target the small percentage of players who will spend hundreds of dollars on in-app purchases.
.. So to recount, Facebook is going gangbusters because of ads for free-to-play games, developers are excited about the chance to cash in via Facebook ads, Google and Twitter are trying to mimic Facebook's success, and Google and especially Apple are hanging their app store hats on the amount of revenue generated by in-app purchases.
In other words, billions of dollars in cold hard cash, and 20x that in valuations, are ultimately dependent on a relatively small number of people who just can't stop playing Candy Crush Saga."
Many of whom are women, their purchases leveraged up into Valley value. How is that for irony?
Most installs from Facebook mobile ads never even open the app once (yes, even once).
If you read the post from the same guy, you will notice he actually targeted people who play word games on mobile.
For Facebook, this means getting "Likes" on your dedicated Facebook Page (aka https://facebook.com/your_brand_name) for Twitter I'm presuming it's followers of @brandname.
This is separate from the actual paid advertising that the platforms offer - which is likely far more effective, targeted and profitable than say a magazine or newspaper ad.
Running a nail salon and want to reach women 18-45 who live in your city? Facebook lets you target that exactly and clicks to the ads just go to your website.
But, there are a number of reasons this is obvious, not the least of which is that people go to Facebook to socialize, not befriend companies.
So, it always felt like a ruse that brands should encourage their customers to engage on Facebook's turf, as it always seemed that it accrued much more value to Facebook than to the brands. Why do I want someone else owning or even inserted between that relationship with my customers?
Yet companies (regrettably, including my own), would even hold contests, etc., effectively paying to get more likes from customers they already had! Then, Facebook pulled their master-stroke of ratcheting down the reach to all of those fans unless the company paid for it. It really was like some kind of racket or "minor extortion".
Remarkably, they also really began to push the idea of paying them even more to get more fans.
So, send you my current customers by converting them to fans, advertise with you to get more fans, then pay you again to advertise to reach all of them? No thanks. I'd rather get a good old-fashioned email address.
I look at their ad numbers and I just don't get it.
But Twitter is a good tool for PR (because every blogger and reporter uses it), and Facebook can be an efficient content distribution channel.
I think brands have a hard time on social media when they have nothing interesting to say. "Like my page" or "download my app" or "tell us how much you love your toilet paper" are not interesting, and Facebook is doing the right thing to hide that crap from more feeds.
But if you can create good content, you can spend money very efficiently on social media. You just need to boost it a bit over the noise floor, and then folks will share and comment to spread it farther.
 video: http://www.youtube.com/watch?v=DK6KfUsVN8w
 slides: http://bavm2013.splashthat.com/img/events/46439/assets/a10b....
Wondering whether there's any merit to the sibling comments speculating that this is the future of, e.g., surveillance.
She was satisfied but apparently you can't bring oranges into Mexico from the USA or vice versa.
Since then I always pack breakfast for early morning flights across the border, but I'm careful to eat it before landing or leave it for the flight attendants to clean up.
Paredit is nice enough to make working with Clojure nicer than working with other languages' syntaxes.
Overall I really like it because I feel like it's the way an IDE should be; even if it's lacking features, the architecture is nice.
It's great they have the Kickstarter money, but I haven't seen any announcements about them making this a product for sale. If anything, it appears they are doing the exact OPPOSITE and distancing themselves from the project altogether.
I'm on Win8.1 x64
Odd to express narcissism in response to this question.
I have no idea whether NeoVim is going in the right direction but the feeling you get from this response is that Bram is focused on guarding his turf rather than thinking constructively about how to move Vim forward.
I think it's a great approach. The reputation system and efficient private search that doesn't require copying the entire database are the hard parts!
He's an impressive guy. I met him when he was going to Berkeley and doing the self-balancing motorcycle for the 2005 Grand Challenge. He'd already done a startup, with a specialized giant laptop for construction sites for people who needed to see blueprints.
The LIDAR Google uses is from Velodyne, which was "Team DAD" in the 2005 Grand Challenge. The first version of their LIDAR fell off their vehicle, but they improved the mechanics and produced that cone-shaped thing Google now uses. That's really a research tool; a different approach is needed for production vehicles. (I still like the Advanced Scientific Concepts flash LIDAR; it's expensive, but that's because it has custom silicon. If you had to get the price down, that's where to start. No moving parts, all electronics.)
I'm kind of disappointed with Google's self-driving effort on the hardware side. I'd expected flash LIDARs, terahertz phased array radars, and other advanced sensors by now. You need to be able to see in all directions, but the requirements to the sides and back are less than for looking ahead. The CMU/Cadillac effort is ahead on the hardware side; their self-driving car has all its sensors integrated into the vehicle so you don't notice them.
(I had an entry in the 2005 Grand Challenge: Team Overbot. Ours was too slow, and we worried about off-road capability too much.)
"From then on, we started doing a lot of work with Google," says Majusiak. "We did almost all of their hardware integration. They were just doing software. We'd get the cars and develop the controllers, and they'd take it from there."
Anybody who has done robotics knows there is a lot of integration involved (hardware and software), and that doing integration is hard and tends to be thankless. It's nice to see some in-depth reporting in a major publication on the full depth of the engineering team.
So, who is Suzanna Musick?
Self-driving car technologies have been actively developed by almost every car company for years now. Many of the beginnings of this work have already made it to market, e.g. parallel park assist, automatic emergency braking, lane merge detection, and adaptive cruise control. And companies like Volvo are already testing their self-driving cars in real-world, difficult conditions in Sweden. And because there are only a few car conglomerates, they will simply share technology within each group.
So what is their end game?
There are 350,032 unique passwords in there.
* 122,094 (~35%) are in the rockyou dump (which has 14,344,391 unique entries)
* 2,898 are in my list of cracked LinkedIn passwords, excluding those in the rockyou dump (2,002,484 unique entries)
* 27,639 are in the phpbb dump I have (184,344 unique entries)
'hunter2' is in the list.
However, queues can "fix" an overload in one sense by making an engineering tradeoff of increased latency and additional complexity (e.g. SPOF). Peak capacity handling didn't increase but overall server utilization can be maximized because jobs waiting in a queue will eventually soak up resources running at less than 100% capacity.
If response time (or job completion time) remains a fixed engineering constraint, queues will not magically solve that.
Using ONLY a queue to "fix" a bottleneck can serve to buffer your input, but will still fail when your sustained input is greater than your sustained output.
I feel like this is pretty common-sense to most people, that the only way to fix a bottleneck is to widen the neck. If people are running into this situation in the real world, they either don't understand their core problem or are blocked from actually fixing or measuring it.
The problem stated by the author doesn't have really anything to do with queues. A queue is a tool that someone, quite sensibly, might use as part of a solution to widen a bottleneck, but obviously it can't be the entire solution.
A queue has a perk of smoothing out the peaks which may give the illusion of fixing overload, but really you haven't added any capacity, only latency. Also latency is sometimes a reasonable thing to add if it allows higher server utilisation rates.
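The point this thread keeps circling can be shown with a toy simulation (all numbers illustrative): when arrivals persistently exceed service rate, a bounded queue only delays the failure. Once full, it drops work at the same long-run rate as having no queue at all.

```javascript
// Bounded queue under sustained overload: arrivals enter the queue
// (or are dropped if it's full), then the server drains what it can
// each tick. The queue buys time while filling, nothing more.
function simulate({ capacity, arrivalsPerTick, servicePerTick, ticks }) {
  let depth = 0, served = 0, dropped = 0;
  for (let t = 0; t < ticks; t++) {
    for (let i = 0; i < arrivalsPerTick; i++) {
      if (depth < capacity) depth++; else dropped++;
    }
    const done = Math.min(depth, servicePerTick);
    depth -= done;
    served += done;
  }
  return { depth, served, dropped };
}
```

With 10 arrivals/tick against 8 served/tick and capacity 100, the queue fills in about 50 ticks, after which the excess 2/tick is dropped anyway; only widening the bottleneck (raising `servicePerTick`) changes that.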
As an aside, I really enjoy working with AWS' SQS queues, as they allow you to define a maximum number of reads + a redrive policy. So you can throw, for example, items in a "dead messages" queue if they were processed unsuccessfully 2 times. We use this to replay data, stick it almost immediately in our unit tests, and improve software incrementally this way.
The question is what is the business requirement and guarantee regarding the message delivery. In my experience with logging systems is that losing a very small percentage of your data in case there is an outage in the storage layer is tolerable.
That allows us to write code that sends or receives messages using a channel (Go, Clojure) and has sane timeouts to gracefully deal with overload, saving 200 (or so) messages in a local buffer and going back to reading the queue after a read timeout.
With this concurrency model, queues are a very powerful tool for separating concerns in your code. Having thread-safe parts that can be executed as a thread or as a goroutine lets you use 1:1 mapping to OS threads or N:M.
Back to the original point: queues don't fix overload, but I still prefer to deal with overload using queues over other solutions (locking or whatever).
Anyone reading this may be interested in the related concept of the 'circuit breaker' technique employed by things such as Hystrix (Netflix), to avoid getting into this state.
Not a solution per se, but the simple philosophy is to set yourself up to fail fast: when back pressure reaches some threshold the 'circuit opens' and your servers immediately respond to new requests with an error, instead of ingesting huge numbers of requests which flood your internal queues or overload your infrastructure.
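A minimal sketch of the idea (not Hystrix's actual API): after a threshold of consecutive failures the circuit opens and calls fail fast without touching the backend, then a cooldown lets a single probe through to test recovery.

```javascript
// Wrap an async call in a toy circuit breaker. Parameters are
// illustrative defaults, not tuned values.
function circuitBreaker(fn, { threshold = 5, cooldownMs = 1000 } = {}) {
  let failures = 0, openedAt = 0;
  return async function (...args) {
    if (failures >= threshold) {
      if (Date.now() - openedAt < cooldownMs) {
        throw new Error('circuit open'); // fail fast, shed the load
      }
      failures = threshold - 1; // half-open: allow one probe through
    }
    try {
      const result = await fn(...args);
      failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      if (++failures === threshold) openedAt = Date.now();
      throw err;
    }
  };
}
```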
OT: The hardest part I have found with queue design is task cancellation. How does one 'cancel' tasks that are already in the queue or being processed? I haven't come across a good framework that solves this cleanly. For example, say I queue up a task X, and the task now needs to be 'cancelled'. How can I ensure this? Looks like I need some sort of messaging bus?
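One common pattern, sketched here without reference to any specific framework: keep a shared set of cancelled task ids that workers consult before starting work. Tasks already in flight still need cooperative cancellation checks, which is why a message bus tends to show up for that part.

```javascript
// Tombstone-style cancellation: cancelled ids are recorded, and
// queued tasks are skipped at dequeue time. This handles "still in
// the queue"; "already being processed" needs the worker to poll
// the same set (or subscribe to a bus) mid-task.
function makeCancellableQueue() {
  const queue = [];
  const cancelled = new Set();
  return {
    enqueue(id, work) { queue.push({ id, work }); },
    cancel(id) { cancelled.add(id); },
    runNext() {
      while (queue.length) {
        const task = queue.shift();
        if (cancelled.has(task.id)) continue; // skip cancelled tasks
        return task.work();
      }
      return null;
    },
  };
}
```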
This article could have been "X Don't Fix Overload". If you're not looking for, and fixing, the actual bottleneck, then any improperly implemented solution could be in that title.
With that being said, there is good information in there. I just didn't agree with the strong bias against queues.
Another nice illustration of how when arrival rate is greater than departure rate we get overflowing queues:
TLDR: Don't build queues for non-queue friendly processing.
Yes, if things further down your stack are not capable of handling the load in decent time your queue is going to overflow (assuming a fixed capacity). No, it does not make your entire stack faster -- it just defers processing in a way you can manage and tame it. What can become faster is things like front-end requests, which are no longer held up by blocking operations or a starved CPU. Either way, it buys you some time to actually re-engineer your stack to work faster and at greater scale.
E.g. on a webserver where people get refresh-happy, I'd set the timeout at about 5 seconds. If requests are taking on average 2 seconds and the request is already 3 seconds old, return a 500, drop the connection, etc. Then clear the rest of the stack.
Answer requests in a LIFO manner; at least that way, in an overload condition, some requests are answered and the 'queue' is cleared quickly.
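The LIFO-plus-deadline idea above can be sketched as follows; `maxAgeMs` is an illustrative cutoff after which a waiting client has presumably given up.

```javascript
// Serve newest requests first, and silently drop anything that has
// already waited past its deadline: under overload, some requests
// complete fresh instead of all of them completing stale.
function makeLifoQueue(maxAgeMs) {
  const stack = [];
  return {
    push(req) { stack.push({ req, at: Date.now() }); },
    next() {
      while (stack.length) {
        const item = stack.pop();                     // LIFO: newest first
        if (Date.now() - item.at <= maxAgeMs) return item.req;
        // stale entry: the client has likely refreshed or left; drop it
      }
      return null;
    },
  };
}
```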
It's like when you have 3 papers due and only time to do 2, you elect to not do one so you can do the other two. You should ideally kill the one that is due first.
Sacrifice a few for the good of the many.
Chunk the overloaded queue into small groups and then have your workers chew a bit of each at a time. The LAST-IN folks won't have to wait ages for the FIRST-IN folks to finish.
Off topic, but Parisian French sounds way nicer than Québécois, the language of blue-collar workers from the 18th century; I had to unlearn the horrible Québécois after high school.
If it turns out the brain can recruit white matter in learning, then suddenly we have 10-50 times more cells (the white matter/grey matter ratio) that can participate in intelligence. I suspect the way intelligence is organized would also differ between the white and grey regions, not to mention how they interact with each other! It calls into question a lot of the assumptions computational scientists make in estimating the complexity of a simulated brain. We might be at the start of understanding how truly complex the brain is.
A good overview of this understudied portion of the brain is the book "The Other Brain" by Douglas Fields.
The basic theory is that young brains soak up experience, while older brains consolidate it. So older brains can make bigger leaps of logic; the old cliche about "wiser" old people actually does have a biological basis. I wonder if this new study is just another piece of the same phenomenon. (New memory is stored in a different part of the brain because the old plastic/learning-storage centers have already been optimized and compressed...)
I've been trying to replicate this work. So far haven't succeeded. The code for the system described in the article isn't available. Only a "newer" version is available. In either case, both implementations will behave differently depending on how quickly they execute, so reproducing the described results is proving tricky. Also the motor output program isn't available, nor well defined.
When next I get time, I planned on posting to the Google Group for this project and see if they are willing to enlighten. I re-implemented the code as a single C program, and re-architected it to use a synchronous tick, moving from one state to the next. So my program has re-producible results, regardless of speed. It just doesn't appear to behave correctly yet. More tinkering...
"Eisenstein playfully hyped the virtues of the "dynamic square," a screen that was exactly as high as it was wide. He did so in part because to him the square was modern, charged with productive machine force. This more purely cinematic screen was, according to Eisenstein, necessary for properly showcasing the energies, conflicts, and collisions germane to the moving image arts. It would also, at least in theory, be the most accommodating frame, capable of hosting images composed for planes that were either horizontal or vertical. Eisenstein proclaimed that previous industry standards (4:3), as well as contemporaneous calls for wider screens, were nostalgic, calling forth a dated viewing regime dictated by traditional art forms."
I don't know if a completely square monitor would be as well received as something at least slightly rectangular - 1920x1440 (4:3) might be a good compromise.
Reading logs or browsing code is quite nice with the vertical screen, as is reading some vertically oriented PDF materials. Cat videos are usually watched on the horizontal screen. :)
Of course, many monitors are worse. A 1920x1080 27" monitor is only 82 pixels per inch!
The monitor I'm buying next is probably the Dell UP2414Q. With 3840x2160 resolution on a 23.8" panel, it has 185 pixels per inch. It's expensive and you need a machine that can drive it properly, but that is a nice pixel density.
My dual monitor setup has one landscape, one portrait, and I'm never going back. Viewing documents & code is just so much more convenient.
However, what would be interesting is if this was priced at normal monitor prices ... the ATC monitors are incredibly expensive.
Hoping for a power of 2 square model.
When viewed through this lens, many of the seemingly ancillary Google business units start to make strategic sense. Android (control the device), Chrome (control the browser), Fiber (control the tubes).
Each of these channels is an opportunity for disruption by some competitor search engine and Google wants to make sure they don't get blindsided. Or one of the gateways could demand a massive tribute for Google to pass through (cable companies are pushing for this via the war against net neutrality).
If Google didn't have Chrome and Firefox was the leading browser, they'd be in big trouble with this news. Lucky for them they thought about this a long time ago and built a browser which now accounts for 50% of market share.
Yahoo NEEDS this deal. For Google, it's a nice to have.
* This is a new, more flexible partnership strategy.
* Continuing the existing relationship with Google was an option, but Mozilla chose to end the Google relationship.
* All the options Mozilla considered had strong, improved economic terms (but the concrete numbers are not public). Because all the options had improved economics, that allowed Mozilla to really consider the strategic outlook.
* The Yahoo agreement in the US is for five years.
* Yahoo will be rolling out a new, improved search tool soon.
* Mozilla has agreements with Yandex and Baidu for Russia and China.
* Google will remain an included option in Firefox and Mozilla will continue to support its use.
Yahoo is angling to be a digital magazine, which I like as a business model much more than Google's.
Firefox is making a strong case for itself as the privacy centric browser.
I still remember when Firefox started gaining market share. Even non-tech-savvy people were getting firefox, because IE was so bad for security, and so hard to maintain.
Excited to see what the next few months brings.
As a Firefox user, all in all I'm rather pleased. I've just tried a few of my typical searches on Yahoo and though the expected links aren't the top results (seeing links for Rust-the-game instead of Rust-the-language...), they're on the first page. Let's see if that improves with time as I use it more. And I'm happy to support some more diversity in this space.
While I believe the party line is probably true ("Mozilla decided not to go with Google"), it's also disingenuous.
Mozilla chose not to go with Google because Google wasn't willing to pay what they were before. They straight up told Mozilla this 3 years ago when they signed the billion dollar contract; Mozilla had 3 years to become profitable. That's why they switched focus to FirefoxOS; they thought that by now they'd be profitable via selling phones and the app store. (At the time, Bing was bidding against Google, however Mozilla went with the smaller check from Google because they knew using Bing would seem like selling out.)
For the record, Google made billions off being Firefox's default search engine. They paid Firefox $300mil a year for three years, but that was only a small fraction of how much Google profited from Firefox searches. Not sure if it's still true, but three years ago they made more from Firefox than they did from Chrome.
So, yes, Mozilla could have gone with Google still. It's not like Google said "nope, you can't use us as the default!". However, they went with Yahoo! because Google wasn't willing to pay what Mozilla needed. The whole "Mozilla picked Yahoo! to enable choice" has been tweeted by every Mozillian I know (and said multiple times in this thread), but it's a meaningless statement. If they really meant that, you'd be prompted when you opened Firefox the first time to pick a search engine.
Edit: I meant to say, did Google decide to end it, or did Mozilla, and why?
It's disappointing to hear that Firefox isn't yet using Mozilla's location services project.
Background: These are the services that will, say, take the SSID of your current WiFi access point and map that to a latitude/longitude. My understanding is that almost all commercial users subscribe to Skyhook Wireless's database, other than Google, which has built its own WiFi AP maps using its StreetView trucks.
I think Mozilla's "open" service, contributed by individual users, is a welcome alternative, since it means you no longer have to send your location to a large corporation on every look-up.
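The core of such a service is conceptually just a lookup against a crowd-sourced AP database. A toy sketch (the BSSIDs, coordinates, and averaging scheme are all illustrative assumptions, not Mozilla's or Skyhook's actual API):

```python
# Toy WiFi-AP geolocation: map access-point identifiers to known
# positions and estimate the client's location by averaging the
# positions of the APs it can currently see.

AP_DB = {  # hypothetical crowd-sourced database: BSSID -> (lat, lon)
    "aa:bb:cc:00:00:01": (37.7749, -122.4194),
    "aa:bb:cc:00:00:02": (37.7751, -122.4190),
}

def locate(visible_bssids):
    hits = [AP_DB[b] for b in visible_bssids if b in AP_DB]
    if not hits:
        return None  # no known APs in range
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)
```

Real services weight by signal strength and filter stale entries, but the privacy point stands either way: with an open database, this lookup could even run locally.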
More details, Yandex in Russia and Baidu in China, etc.
If you are making a browser focused on giving freedom to users, you are supposed to:
1. Either let the users choose the search engine as an on-boarding step, or
2. Offer the industry best/leader as the default.
In this particular case Firefox has made a suboptimal choice on our behalf in the name of "choice".
How exactly is this different from:
1. Comcast taking more money from Netflix to give them better bandwidth?
Now my grandmother will end up seeing 0 organic search results above the fold and will have to learn to either change the search settings or simply use that icon with red, green, and yellow around a blue dot (Chrome).
- I hope this strategic partnership does not mean 0 organic search results above the fold. That is what Yahoo is doing at the moment.
- I hope FF does not come up with any Yahoo spyware/toolbars etc.
Search is a weird thing on the internet.
Seems like a bad decision for Mozilla.
It was a different story in 2011, when Woz went on CNBC and said that the company "has grown as fast as Apple so far", pushing the stock to the highest point it would ever reach, just a month before the share lock-up expiration.
That's not a very big market.
1) Map out your interface and interaction trees first
1-click - most common actions
2-clicks - second most common actions
3-clicks - power user level stuff
Put the most commonly used stuff at 1-click or interaction. If you don't know what goes at 2 and 3 clicks in, you don't understand how the application is used, because you don't understand what the most common interactions are. If you've run out of room for the 1-click stuff in your UI, then your UI concept is poorly designed. Keep iterating and collecting information until you can fulfill this.
Don't put anything at more than 3 clicks in.
2) Double the number of interaction points in the UI. Assume the application will grow and add features. If you optimize your design for the number of features you have today, you'll have nowhere to put all the stuff you're going to get over the application lifetime, and it'll all just end up getting buried in menus. I've seen lots of gorgeous, carefully designed applications die a year in because of this.
Double everything and see if that number of interaction points still fits within your concept, that way the interface has room to grow without getting messy.
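A rough way to audit the click-depth rule above is to walk the interface tree and measure how many clicks each action is from the root. A sketch with a purely hypothetical menu structure:

```python
# Walk a UI tree (menus are dicts, actions are leaves) and report how
# many clicks each action is from the root. Anything deeper than 3
# clicks violates the rule above.

def click_depths(node, depth=0, out=None):
    out = {} if out is None else out
    for name, child in node.items():
        if isinstance(child, dict):          # a submenu: one click deeper
            click_depths(child, depth + 1, out)
        else:                                # a leaf action
            out[name] = depth + 1
    return out

ui = {  # hypothetical interface tree
    "Save": "action",
    "Edit": {"Undo": "action", "Preferences": {"Theme": {"Dark": "action"}}},
}

depths = click_depths(ui)
too_deep = [a for a, d in depths.items() if d > 3]
```

Here "Save" is 1 click, "Undo" is 2, and "Dark" is 4 clicks deep and gets flagged; the same walk also tells you whether the 1-click tier matches your most common actions.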
3) Don't make your users interpret, make them understand.
If you're concerned about how universally an icon is interpreted across cultures, you're doing it wrong. Interpretation is an additional step your users have to go through to use your UI; it's like putting everything at 2, 3 and 4 clicks in, because now they not only have to look and scan the UI for what they want, they need to figure out what each interface item means before they can interact with it.
Even worse, as they grow to become accustomed to your UI, they're going to end up memorizing location and placement of options because the interface widgets take too long to interpret. Get 2 revisions down the road and you move a button and wham your tech support calls jump 50% because the users never bothered to remember what the symbol for their action looked like, just where it was on the screen.
4) Everything must be discoverable. This is why the world moved to GUIs from CLIs. Don't make your users play a 1990's era adventure game where they have to click every pixel on the screen to see if they can advance their usage. The Flat UI trend is notorious for this.
5) Consistency rules. Also see #3.
6) Eliminate Steps. Map out how many steps certain actions are. Cut them down to as few as possible. I remember one time going through a file import process with a tool, by the time you got the file imported the user had to navigate 27 different steps! Almost every step required minimal or no user input. Nobody had ever bothered to map out the interaction patterns in the tool before but users were constantly complaining about how difficult it was to use.
We reworked the workflow and got it down to 3 steps and user-engagement jumped triple digits.
7) After you've addressed 1-6, make it look nice.
Sure, the first thing you'll hear to that issue is "Do you really need to have that many options/paths/data?". And granted, quite often this is applicable, although not always in the same way (hiding rarely used options vs. eliminating them, i.e. "advanced options" vs. "only one friggin mouse button").
But often enough, presenting lots of data and hierarchies is the whole point of an app, especially when it gets more about enterprise systems than "what pancakes do my friends like" web 2.0 frippery. And that's where the ideas coming from ad design and typography kinda fail.
Which is why people like Tufte are so respected, as they go beyond this. If I recall correctly, in the initial review of the iPhone Tufte recommended against even the minimal margins of the photo gallery, removing white space for a better experience. And yes, knowing the rules before you break them might be a part of that...
If you don't do this as your full-time job, I'd very much recommend going for "usable" instead of "gorgeous". The latter is very much an 80/20 deal, where you spend insane amounts of time, asking co-workers and running A/B tests just to get that final ratio or pixel size right. Whereas most of your customers still have Napa Valley as a background picture behind their copy of IE9...
I don't really miss under construction signs and rotating skulls, but I do have the slight feeling that a lot of what designers are doing will be like early 20th century typography in a few years, where even some of its major proponents aren't quite sure about it anymore...
There's an annoying trend in UI design to make the simple stuff look really good, while making more difficult operations harder. If you think your UI concept is great, try mocking up something like Photoshop or a 3D drawing program. Those have really hard UI problems to solve. The mania for "clean design" has resulted in such things as invisible close buttons that only appear when you mouse over them. (Facebook ads work like that.)
Bob Lutz, who used to be head of General Motors, ran into this. His designers had built a concept dashboard which looked like something from Bang and Olufsen, with the black-on-black design popular in the 1980s. It looked really great. Nobody could operate the controls reliably without training or a manual.
There was a brief period when creative user interfaces on web pages got completely out of hand. Check out
for an over the top example from a French fashion design house. They went bankrupt a year after putting up that page.
And they make them completely unusable to anyone with less than a 27" screen, but all the hip designers apparently don't care about users who have anything less than a 2560*1440 display just like them (1024x768 is still the norm for a lot of people outside of the Silicon Valley bubble).
I've seen so many web products that are less usable than 20 year old command line interfaces, notably because of this "you can never have too much whitespace!" mentality, it's appalling.
The first goal of an interface is to be used.
Agreed, the "flat design" trend seems like a pushback against over-done skeuomorphism that went a little too far in removing all the visual depth cues.
I dislike this attitude, for me it is very reminiscent of the way people just shut down when math comes up. "I hate math" and that's it. I'm not a math person, my brain doesn't work that way. On and on. There is even a perverse sort of pride in it. Why not do the work, why not try to get better, why not try to expand into things we're not good at yet.
If devs only started by lining things up and thinking in terms of visual hierarchy, they'd already be 90% there.
I guess whoever did the pull-quotes was also an engineering-major...
NoFlo, Origami, and RelativeWave are all offering this kind of visual programming, but every example I saw became too complicated to follow before it was useful.
I have never used either, but both look so much better than working off static PSD comps. I would be curious if a person who has used both can chime in on their impressions.
They design/build their networking gear, full hw/sw stack. This is cheaper and more reliable (their code is simple/customized to their datacenter use case.)
They also have SingleRoot I/O virtualization at each server: each guest VM gets its own hardware virtualized link, which is great for reducing the giant tail at scale problem (google for Jeff Dean's description of the problem.)
Their relational DB service RDS is getting popular: 40% of their customers use it. So they compete with Oracle by offering a similarly highly-available service at a much lower price. They keep adding new data services: Aurora, Redshift, EBS.
They design/build their power infrastructure. Faster.
They are very customer oriented, they make things simple/painless for customer use cases. They are obsessed with metrics, measuring everything, with tight feedback loops to improve things weekly. They rolled 449 new services + major features in 2014 alone.
Otherwise, a function might accidentally return None and cause a bug. For instance, consider a function that looks up a user and returns the name. The name might have null as a valid option: Some(null) indicates a user was found with no name, while None indicates the user wasn't found.
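That distinction can be made concrete with a small sketch (hypothetical user data; the point is only that "found, but the value is null" and "not found at all" are different answers):

```python
# An Option-style wrapper that distinguishes Some(None) -- "found, but
# the name is null" -- from Nothing -- "no such user at all".

class Some:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, Some) and self.value == other.value

Nothing = None  # sentinel for "no result"

USERS = {"alice": "Alice", "bob": None}  # bob exists but has no name

def lookup_name(user_id):
    if user_id not in USERS:
        return Nothing            # user not found
    return Some(USERS[user_id])   # found; the name itself may be None
```

A caller can now tell the two failure-ish cases apart instead of collapsing them into a single null.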
As an aside, I recognised your name in the example code on there - I used to occasionally hang around on 4four :)
- There is an x for which Some(x) is not a Some. This violates useful invariants.
- Some((int?)1) is a Some, but Some((object)(int?)1) is a None.
- optionValue.Cast&lt;object&gt;().Cast&lt;T&gt;() is not the identity function.
- I can't use your option type in existing code without doing careful null analysis.
- As a rule of thumb, in generic code you should treat null as just another instance of T with no special treatment (beyond the acrobatics to make the code not throw an exception). That way both users enforcing a non-null constraint and users allowing nulls can use your type.
While I do have an Option type with all the relevant LINQ operators defined, I've resorted to using Nullable&lt;T&gt; for all string parsing functions that return primitives, as Nullable is a type that's actually supported by databases. For any nullable, I can use nullableVar.DefaultTo(defaultValue) to convert it to a non-nullable primitive type. Personally, I don't find Option types in C# all that compelling because the pattern matching mechanism doesn't compel you to handle all cases like F#'s does.
All in all, it looks like an interesting library.
Thanks for posting!
I feel this sort of library is almost necessary, if only for things like C#'s poor tuple support.
I'm not entirely sure about the casing conventions. I mostly code in Clojure/Haskell/F#, but 'when in Rome'. It's likely the sort of thing that will turn off a lot of stubborn developers.
If this is just a fun project, scratch what I said, I love building useless stuff myself, but if it is intended to be used seriously I don't get the point.
I am assuming it is minimal, because the struct would remain on the stack, and the object on the heap. There is just an unwrapping and wrapping cost but no more than Nullable? However this may be a naive view.
I like the syntax though. Where I work we use Code Contracts. This reduces bugs due to nulls but sometimes 25% of the code is Contract.XXX(...) which is annoying to read. And more typing too.
var ab = tuple("a","b");
Awful. It's very hard to read what is going on here; someone else reading your code would be utterly confused. The method name starts with a lowercase character, and I'm not sure I like the possibility of omitting class names for static classes (granted, this is not your invention but a new C# feature, right?); to me it seems like it is missing the 'new' keyword.
I'd rather use Tuple.Create<T1,T2>(T1 first, T2 second) until they add proper support for:
var ab = ("a","b");
My guess is they are using one of the following placements:
Accelerated Learning (DARPA) F10/Left Arm - http://tdcsplacements.com/placements/accelerated-learning/
"Savant Learning" (Chi & Snider (2011)) T4/T3 - https://www.reddit.com/r/tDCS/comments/2e7idx/simple_montage...
At the very least, they should save the images and targets to use as training data, since that's being generated manually already. Then they could see how predictive a model could be, instead of just guessing that it would be bad.
Knowing the U.S. military, rather than addressing the root cause of the issue (namely: the totally senseless cult of sleep deprivation in the armed forces -- despite the ample research showing the mental and physical damage it causes), they'll start offering, what shall we call them? -- special "performance-enhancing" helmets. First on an optional basis, but then on a not-so-optional basis -- to administer optimally measured voltage, at optimally timed occasions.
From there it's a short hop to having these helmets (by then no longer optional at all) administer other kinds of signals, directly to the soldier's brains: to relay orders, identify targets... and to tell them when to pull the trigger.
"Yet", my friends, "yet".
I love reading quotes like that because it speaks to pure opportunity. Someone will eventually figure out an algorithmic solution to X, and that should remind us all how wrong the "all the good ideas have been done" line of thinking really is.
I think the reason we're getting these long-term effects is that they are making some longer-lasting changes to the neural connections.
Here's what I want:
- Easy sharding, a la Elasticsearch. I want virtual shards that can be moved node to node and an easy to understand primary/replica shard system for write/reads. I want my DB nodes to find each other with an easy discovery system with plugins for AWS/Azure/Digital Ocean etc.
- Fucking SQL. I don't want to learn your stupid DSL. I want to give coworkers a SQL client and say "go! You already know how to use this!". If I want a new feature, then dammit, build on top of SQL the way PostgreSQL has. Odds are, regardless of whether it's some JSON API or SQL, my language will have a client for it that will be superior to writing raw queries anyway.
- Easily pluggable data management systems. For example, if I do a lot of SUMs and I know I'm not doing writes very often, I want to use CStore. If I'm storing a bunch of strings, I want to be able to index them any way I please - maybe one index with Analyzer/Tokenizer X and another with Analyzer/Tokenizer Y - all in a nice inverted index. Good, I can make an autocomplete now. Oh, and sometimes I want a good ol' RDBMS.
- Reactive programming! It works well in the front end and it'd be amazing in the backend. For example, I want to make a materialized view that's the result of a query, but that gets updated as new rows get inserted or as the rows it uses gets updated. Let's call it a continuous view or something. Eventual consistency is fine. Clever continuous views can solve a lot of performance issues.
- I want to be able to choose if a table/db is always in memory or not. I don't care about individual rows - that sounds like someone else's problem.
- Easy pipelining - these continuous views mean that an insert can span a lot of jobs because one continuous view can be dependent on another. I want my database to manage all of this for me and I want to forget that Hadoop ever existed. I want to be able to give my database a bunch of nodes that are just for working jobs if need be. Maybe allow custom throttling for the updates of these "continuous views" so the queries don't get re-run every update if they're too frequent.
- While I'm at it, I want a pony, too. But I'd settle for this being open source instead.
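The "continuous view" idea above can be sketched as a materialized aggregate that is updated incrementally on every insert, rather than by re-running the query against the whole table (a toy in-process version; the class and column names are illustrative, not any real product's API):

```python
# Toy "continuous view": a running SUM grouped by key, kept up to date
# on each insert instead of recomputed from scratch.

class ContinuousSum:
    def __init__(self, key_col, val_col):
        self.key_col, self.val_col = key_col, val_col
        self.view = {}  # materialized result: key -> running SUM

    def on_insert(self, row):
        k = row[self.key_col]
        self.view[k] = self.view.get(k, 0) + row[self.val_col]

table = []
view = ContinuousSum("region", "sales")

for row in [{"region": "eu", "sales": 10},
            {"region": "us", "sales": 5},
            {"region": "eu", "sales": 7}]:
    table.append(row)      # normal insert path
    view.on_insert(row)    # view updated incrementally, O(1) per row
```

Reading the view is then a constant-time lookup, and eventual consistency falls out naturally if the update is applied asynchronously.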
There's a lot of possible directions for the DB world in the next decade. Me, I think the line between DBs and MapReduce/ETL/Pipelining is going to be blurred.
- In-memory databases offer few advantages over a disk-backed database with a properly designed I/O scheduler. In-memory databases are generally only faster if the disk-backed database uses mmap() for cache replacement or similarly terrible I/O scheduling. The big advantage of in-memory databases is that you avoid the enormously complicated implementation task of writing a good I/O scheduler and disk cache. For the user, there is little performance difference for a given workload on a given piece of server hardware.
- Data structures and algorithms have long existed for supercomputing applications that are very effective at exploiting cache and RAM locality. Most supercomputing applications are actually bottlenecked by memory bandwidth (not compute). Few databases do things this way -- it is a bit outside the evolutionary history of database internals -- because few database designers have experience optimizing for memory bandwidth. This is one of the reasons that some disk-backed databases like SpaceCurve have much higher throughput than in-memory databases: excellent I/O scheduling (no I/O bottlenecks) and memory bandwidth optimized internals (higher throughput of what is in cache).
The trend in database engines is highly pipelined execution paths within a single thread with almost no coordination or interactions between threads. If you look at codes that are designed to optimize memory bandwidth, this is the way they are designed. No context switching and virtually no shared data structures. Properly implemented, you can easily saturate both sides of a 10GbE NIC on a modest server simultaneously for many database workloads.
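That shared-nothing style can be illustrated with a small sketch: each worker aggregates its own partition with purely private state, and results are merged only once at the end (a thread pool is used here for illustration; the partitioning scheme is assumed, not taken from any particular engine):

```python
# Shared-nothing aggregation: each worker owns a private partition and
# a private result dict; no locks or shared structures on the hot path.

from concurrent.futures import ThreadPoolExecutor

def aggregate_partition(rows):
    local = {}                      # private to this worker
    for key, value in rows:
        local[key] = local.get(key, 0) + value
    return local

def parallel_aggregate(partitions):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(aggregate_partition, partitions))
    merged = {}                     # single coordination point at the end
    for local in results:
        for k, v in local.items():
            merged[k] = merged.get(k, 0) + v
    return merged
```

Because workers never touch shared data mid-query, there is no contention and no context-switching on the hot path; the only synchronization is the final merge.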
Creates a red herring by stating he's been doing this a long time and has seen it all.
Creates straw man after straw man in the trashing of memory caches (avoids their use cases), Dynamo (there's a good reason tons of people use various NoSQL Databases) and Hadoop (C'mon, now).
He also creates more logical fallacies in calling various concepts silver bullets that ended up having problems. I don't think anyone serious about technology thinks replication, sharding, or load balancing "solves everything". Nothing is a silver bullet, and anyone who claims something is one is selling you something...
And then he fails to really address that MemSQL itself uses replication and sharding (in a limited sense, since the core SQL concept of a JOIN is wrecked here, and they have a big warning on their troubleshooting page about an error your users must see often).
SQL is great but I have plenty of great reasons to use other data stores. SQL isn't a silver bullet for data.
Point is, he is calling MemSQL a silver bullet and is obviously trying to sell something while ripping plenty of great ideas and concepts by picking the worst implementations of them and largest misunderstandings of them.
"Bringing you yesterday's insights, TOMORROW"
Am I missing something, or should it read "hard disk" rather than "integrated circuit" here?
Small code & data fit in cache, and run full speed. Fortunately, I can get at the GB that used to be (mainly) on my hard drive faster, now.
OTOH, databases are only one component of modern architectures, which the article correctly asserts are largely limited in terms of scalability by throughput and latency. However, scalability is often secondary to functionality. And in terms of functionality, the long list of database types trawled out through the article only serve to highlight the real chokepoint: cognitive overhead.
Perhaps what we really need are tools that enable us to more easily stop and think about the problem. Ideally, tools to test, profile, compare, and switch between storage or other subsystem architectures without having to delve into the minute intricacies of each.
Success really depends on the conception of the problem, the design of the system, not in the details of how it's coded. - Leslie Lamport
Suggestion #1: Describe what we get for the paid "Pro" version. At present, your site only says there's a "Free" version and a "Pro" version, but does not differentiate between the two.
(Note to self: Do I really want the "Professional" version of a drinking app on my phone? -- Hmmm.... Decisions, decisions. ;-)
Suggestion #2: Give a bit more information about the ratings and reviews, like where they come from.
Suggestion #3: I know there's a craft brewers association of some sort in the US (I saw it in a documentary I watched a while ago). It might be a useful source of data, particularly for the more esoteric, seasonal, and limited-run brews. I think the following is the group site:
One critique: your website doesn't explain the difference between Free and Pro. I had to go to the App Store to find that info, which took a lot longer.
One suggestion: add other dimensions for sorting besides bitterness. There are a lot of things other than bitterness that distinguish different styles.
One feature request: Let me track my own ratings and view them later. This is the first beer app that feels fast enough to use for tracking my own beer ratings. I love tracking books I've read in Goodreads because it makes it easy to find them again. I want to do the same thing for beer, but the apps I've tried (Pintly and BeerAdvocate) have been painfully slow and hard to use.
Very excited that I can stop dreaming of it existing and use your version!
As I was about to submit the comment, it finally finished processing, returning a list of beers that weren't in the original menu image. Not surprised, given the poor quality inherent to taking a picture of a low-res picture on a low-density LCD screen, so the only issue I see here is how long it took to process the image.
Great idea, looking forward to trying it at the pub tonight (hopefully I'll have better success there).
Oh, and if they are bottles, the usual ml in the bottle and the ABV, plus a score saying best bang for your buck :p
Thanks for making this!
* heads off to the bar
Are you blending user ratings with the ratebeer ratings in screenshots, or keeping them separate?
Wineglass has a feature I thought was pretty neat -- letting you know whether or not the price was 'fair for a restaurant' given typical industry markups. Perhaps not as applicable to beer, but could be a cool feature. And then you could surface 'bars with the best deals on beers you'll love.'
I'm in online wine media, so I'm not as familiar with the beer space, but it seems like there are lots of areas for collaboration (e.g. Nextglass, Untappd, BeerMenus.com as a fallback for OCR fails).
P.S. I understand the need to monetize, but having heavily used and played with dozens of apps in the wine/beer/liquor space, both free and paid, it's rare to see random iAds (so far: Target.com, some casino game install ad, and another casino game install ad). Perhaps the revenue is worth it, but it feels like there are much more interesting ways to monetize (native ads in terms of featured beers, all sorts of brewery/bar partnerships) than that sort of junky ad...
- Would love the ability to save a collection of favorites, maybe add folders or tagging?
- I often get asked to create list of recommended craft beers and so would like the ability to share these lists easily
- Your data will be sparse to start with, but I'd be interested in knowing how many other users put beers into their favorites (I find the BeerAdvocate listings a bit tedious to wade through...)
- Not sure you want to, but the professional brewers I speak to are interested in a lower-cost alternative to Untappd  and you might be able to build a business here?
- Maybe this is an East Coast US thing, but there's a growing trend to pair beers with cheese; maybe that's too specific a request, but allowing users to add notes to the beers (metadata beyond BeerAdvocate) might be useful
 https://untappd.com/business http://www.huffingtonpost.com/2014/02/25/beer-cheese-pairing...
Some were tricky, but some were things that you definitely should have (like Sam Adams Winter). This will be really great when it's more complete, but there is still some work to do.
Regardless of whether or not you're using RubyMotion, would you like to share any comments or experiences about developing and releasing the app for both iOS and Android at the same time? I think it's remarkable, it seems like people pick either iOS or Android to launch. A lot of small iOS shops don't even build their Android versions in-house, they will contract the Android version out to an Android firm.
Great work on this app, I love it! Will try it out in the real world later today.
You will be collecting data that will be useful to other people besides me. I don't care if statistics on all app users are sold but if you want to sell things based on my specific behavior I won't use your app.
All that said I'm excited to give it a try.
Great idea, good luck!
How did you get the use of RateBeer's API?
Do you have plans to utilize the data for anything else? That could scare me and excite me at the same time.
There's a few comments here about using native GUI widgets and such but I think that's a tricky approach. Dealing with long-running state of a native app seems very un-PHP. I consider myself fairly experienced with PHP but I'm not familiar with creating background threads with PHP which you'd need for a responsive GUI.
I'd much rather use node-webkit for this kind of desktop app (note: PHP is my go-to language).
Oh, yeah, about the submission title, I couldn't find anything that described it better, even after a lot of head scratching. No, I'm not eloquent, I know... ;)
I am no fan of PHP myself and for this kind of requirement I would use node-webkit which provides seamless integration of the webview and host JS (node.js)
But the most important thing here is that it opens the door to desktop programming for PHP devs, and therefore we may see more useful desktop apps being developed. Previously, some PHP devs may have had interesting concepts for the desktop but weren't able to make them happen due to a lack of skills or time to learn GUI programming.
 http://winbinder.org/ (inactive since '10)
 http://wxphp.org/ (active)
No, thank you.
More specifically, I spent last Saturday setting up PocketMine (a Minecraft server) for my kids. This remarkable piece of engineering is a console app done completely in PHP, and one of the first things it logged was "Can't keep up! Is the server overloaded?". That's a completely idle server on a beefy box. And all I could think was how regrettable it was that a clearly capable programmer voluntarily painted himself into a corner by picking a language that wasn't fit for the job. Same thing with GUI in PHP - yes, it's doable, and yes, there's probably a demand for it, but this demand is misplaced and misguided, and it's not worth endorsing. It's like giving devs a heavier weight to sink deeper into a tar pit instead of giving them a rope for getting out. Desktop apps should never ever be written in PHP unless it's some sort of quick and disposable hack, which is likely not what this project has in mind.