After all, most people on mobile spend their time inside apps, probably from some Google competitor like Facebook. Within these apps, they click on links, which increasingly load inside webviews; the framing app collects info on where people go, and uses this to sell targeted advertising. Facebook is a king in this space, and is now the second largest server of internet display ads, after Google.
Google's counterattack against Facebook's encroachment is twofold: drive people to Google's own apps, like the Google Now Launcher (now the default launcher on Android) or the Google app, present in older versions of Android and available for iOS; and deploy the same content-framing techniques from its own search results page on mobile user agents, where the competition is fiercest. There, Google can also position it as a legitimate UX improvement, which, to its credit, is largely true: big-publisher content sites on mobile were usually usability nightmares and cesspits of ads.
I understand that the author and quite a few others are peeved at this behavior and at the fact that there's no way to turn it off. But it's really not in Google's best interest to even offer the option, because then many people would just turn it off, encouraged by articles like the author's own from last year, written when he was caught off guard and before he gained a more nuanced appreciation for what's really going on.
The bottom line is this: Google is inseparable from its ad-serving and adtech business, which is after all how it makes most of its money. So if you are bothered by its attempts to safeguard its income stream from competitors who have a much easier time curating their own walled gardens, you should stop using Google Search on mobile. There are alternatives, which may not be as thorough at search, but that's the cost of the tradeoff.
Here's something simpler from a non-developer, average-consumer point of view. I recently began taking BART to work daily (new job). For those who don't know, BART is the Bay Area's subway system, and (at least on the East Bay side) cell reception is notoriously spotty.
When I'm on the train, which accounts for 2 hours of my day every day (unfortunately), I'll be browsing, say, Facebook and looking at links that my friends post. Instant Articles almost always load successfully (and quickly), while external links to the actual sites almost always fail to load, or load insanely slowly.
Yes, when you're at home or in the city with good mobile reception, these things make no sense and you'd rather hit the original site directly, give them their ad revenue to support them, etc. But for average consumers who actually have problems like slow internet (like the average joe who rides public transportation and wants to read on their phone), things like AMP and Instant Articles actually help. I can only imagine how much more significant a problem slow internet/slow mobile data is outside of Silicon Valley (where I live).
P.S. I don't work at Google or Facebook, and I know this sounds like propaganda, not to mention this is exactly what they would like to tell you as the "selling points" of these features, in order to continue building their walled garden empires. Fully aware of it, but I did want to bring up why they exist and why I even actually like them.
1. Obscures the web page's URL.
2. Makes manual zoom in/out impossible.
3. Sometimes hides content mentioned in the article, with no ability to scroll horizontally to see it.
4. Confuses Chrome on Android into over-hiding its top address/menu bar (so it takes two swipes down, or scrolling all the way back to the top, to show it) or forces it to stay visible (it won't hide on scroll down).
This is just coming from a user's perspective. Fortunately it doesn't impact my work yet, but it may affect websites I build in the future, given that AMP now covers almost 100% of the news articles I read.
"I don't know why I do it, but for some reason it just doesn't feel right to me to consume the content through AMP. It feels slightly off, and I want the real deal even if it takes a few seconds extra to load."
I have subconsciously been doing the exact same thing for a while now, and I think this quote covers a good deal of public sentiment. It's weird to use AMP, yet slower without it.
Another main issue I have with AMP is that there is no speedy way to check the URL, something I do quite frequently. Instead it's just Google's hosting of the site, with the source only available by clicking on the link icon.
At the risk of sounding like an old fart (I probably do), I fail to understand this frustration of normal mobile users with the so-called slowness of their mobile experience. To quote Louis C.K.: "Give it a second! It's going to space! Can you give it a second to get back from space?!"
The speed difference on SERPs comes from the background downloading and (possibly) pre-rendering of AMP pages. This functionality could easily be added to browsers, keeping people on their own websites, with Google not having control over the content.
We already have <link rel="preload/prefetch"> but how about adding <link rel="prerender" href="http://amp.newswebsite.com/article/etc." />.
This would absolutely give all of the benefits of the AMP Cache without Google embracing and extending the web. It's also much simpler to integrate: every single site can choose to benefit from this (not just SERPs), and I don't end up accidentally sending AMP Cache URLs to my friends on mobile.
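A sketch of how a page could inject such a hint from script today. Note this is illustrative: `addPrerenderHint` is a hypothetical helper, rel="prerender" has always been a best-effort hint that browsers are free to ignore, and Chrome has since moved toward the Speculation Rules API instead.

```javascript
// Hypothetical helper: ask the browser to prefetch/prerender a page
// before the user clicks through. The "doc" parameter is passed in
// so the function is easy to exercise outside a real browser.
function addPrerenderHint(doc, href) {
  const link = doc.createElement("link");
  link.rel = "prerender"; // best-effort hint; browsers may ignore it
  link.href = href;
  doc.head.appendChild(link);
  return link;
}

// In a browser:
// addPrerenderHint(document, "http://amp.newswebsite.com/article/etc.");
```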
The AMP saga has pretty clearly shown that users care about content while Web developers only care about URLs and what goes over the wire. This is a huge disconnect. It doesn't help that many Web developers show no empathy for the users' viewpoint.
Ultimately it probably is easier for Google to add an opt-out to appease a very small, very vocal minority than to educate them that the URL doesn't matter.
There are extensions like No Script that can give similar results for other browsers. https://noscript.net/
Marketing has taken the lead in corporate website projects to the detriment of end users; AMP puts the user back in the center.
Although there is much to be concerned about in Google's ever-expanding reach into the daily life of a good portion of the planet, I think web proponents have more to fear from the likes of FB, Apple, and others appearing on the horizon. These companies are mostly succeeding at meeting current UX expectations (performance, standardization, ease of use), and in doing so they are capturing eyeballs away from the web. It's possible some of those who have left for these walled gardens may not return.
"google amp pages", "google amp annoying", "google amp sucks", "google amp conference"
"test", "cache", "disable", "maps"
This simple test is therefore inconclusive, but my hypothesis is that his search autocomplete hints are, ironically, colored by his search history. The only negative word I got (disabled) is much more neutral.
Now that I think about it, DuckDuckGo's "no tracking" isn't just valuable for privacy. It's also valuable for getting consistent search results across computers without yielding even more information (by logging in, etc.). A few times I made a query and found something useful and surprising, and then wasn't able to replicate the query on another computer to show someone else. In any case, I'd hate to miss a rare interesting page because Google thought that an extra 10 pages about Linux might interest me more.
At the agency I worked at, it was a huge problem, because back then clients and business people still used AOL and would see the jacked-up versions of their sites. There was literally nothing you could do; they did it to small and large sites alike, with abandon.
AMP reminds me a bit of that type of setup, with AOL re-compressing and crunching down sites passing through their network. I agree with Google doing this sort of thing for email, for security, but not for websites. AMP to me is quite annoying and in general a bad move.
Reminds me of this: http://blackhat.com/media/bh-usa-97/blackhat-eetimes.html
Currently, with AMP, Google not only gets your traffic, it gets your content onto its own domains (which makes all content look equally trustworthy). At the same time, it marks sites that have AMP available in its search results, thus weighting those results differently, because it can train users to click on them more.
Ultimately this is bad for everyone but Google.
However, if it were just a framework / set of tools, we could create our own AMP pages and simply serve them from our own domains. Google's cache is really the only unique thing going on here, and we wouldn't have to worry about sharing trust.
As a developer I'm not a fan. It's another thing to manage and maintain. And the last time I checked, you can't leave without some serious consequences.
As a marketer I like the increased CTR but dislike the higher bounce rate and limited features.
HN: But the open Internet!
Users: What's that?
HN: Normal websites!
Users: Like...the really slow ones? With all the annoying popovers? And pages that take forever to load? And for some reason cause my fancy new phone to slow to a crawl?
HN: Well, those websites should rewrite their entire codebase to be faster.
Users: That doesn't help me, though.
HN: Trust in the free market! The problem is you, the user, who just needs to exert more pressure on website purveyors so they'll make performant web sites.
Users: You mean, like, preferring websites that offer faster experiences? Okay. Continues to use AMP.
If they solved the URL issue somehow (even if by faking the address bar) and showed both original and AMP links in search, it would probably reduce the anti-AMP argument quite a bit. Both seem to be just UI issues.
It's the first time I've found an alternative to google.com that is actually usable (i.e. I find what I'm looking for near the top of the first results page every time I make a search).
You can use Google as one of the results providers, but you won't see any AMP results, and since searx can mix in results from Stack Overflow etc, you might find that a different search engine than Google still gets you good results.
I think Google would pull fewer of these monopolistic tricks if people would realise they have genuine alternatives.
What did happen, though, is that I found Google results a lot worse on mobile, and ended up not searching for stuff on my mobile. Google results really look like a mess on mobile now...
They really went from minimalist zen to baroque Indian arabesque over the years...
// ==UserScript==
// @name         Un-AMP
// @namespace    http://tampermonkey.net/
// @version      0.1
// @description  avoids google AMP links and navigates to the original content
// @author       Alenros
// @match        https://www.google.co.il/amp/*
// @match        https://www.google.com/amp/*
// @grant        none
// ==/UserScript==
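The body of the script wasn't preserved in this comment, so here is a sketch of the obvious approach (not necessarily the author's actual code): recognize the Google AMP viewer URL format and navigate to the original.

```javascript
// Google AMP viewer URLs look like:
//   https://www.google.com/amp/s/example.com/article  (https original)
//   https://www.google.com/amp/example.com/article    (http original)
// Strip the viewer prefix and reconstruct the original URL.
function unAmp(url) {
  const m = url.match(/^https:\/\/www\.google\.[a-z.]+\/amp\/(s\/)?(.+)$/);
  if (!m) return url; // not an AMP viewer URL, leave it alone
  return (m[1] ? "https://" : "http://") + m[2];
}

// In the userscript itself, the navigation step would be something like:
// if (unAmp(location.href) !== location.href) location.replace(unAmp(location.href));
```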
I don't think I've ever seen an AMP-enabled website, I certainly never noticed any buttons suggesting I visit the original website.
I'd suggest trying an alternative, maybe https://duckduckgo.com.
Is it an American thing, not enabled for other countries? Just what am I supposed to look for?
Do you only see them when doing a Google search?
But given the URL format, it should be trivial for a browser extension to rewrite links or requests from AMP pages to the original. I bet one already exists.
The ticket was closed a few days ago. People dislike stuff like AMP, but we are probably stuck with it, there just isn't much interest in alternatives.
From Google News, the top hits are served through AMP, and I lose about 1/10 of my screen area to a pointless blue "bar" underneath Safari's address bar. This loss of screen space is the only reason I object to AMP.
Much faster everywhere, in all browsers and platforms.
to our HTTP requests Google and publishers will start noticing we care.
I sometimes think it would've been better if a few things had visibly failed in January 2000.
Snarky... Except that there were probably years of games to notice that you were approaching a "magic number" like 2^31.
- Erik, CEO, Chess.com
Chess.com is a great site, and so are lichess.org and chessable.com; if you like chess you should check them out.
That said, this is definitely indicative of what's going to happen in just 20 years, 6 months, and 20 days from now. I mean, we're still cranking out 32-bit CPUs in the billions, running more and more devices, and devs still aren't thinking beyond a few years out. I know of code that I wrote 12 years ago still happily cranking away in production, and there may be some I wrote even longer ago than that out there... and I guarantee I hadn't given a second thought to the year 2038 problem back then. I doubt many devs are giving it much thought today.
It's truly going to be chaos.
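For the curious, the rollover is easy to demonstrate: one second after 2038-01-19T03:14:07Z, a signed 32-bit time_t wraps negative. Here a typed array stands in for the 32-bit storage:

```javascript
// Seconds since the Unix epoch at the first "bad" second.
const t = Date.parse("2038-01-19T03:14:08Z") / 1000; // 2147483648, i.e. 2^31

// Storing it in a signed 32-bit slot wraps it around.
const wrapped = new Int32Array([t])[0];

console.log(t);       // 2147483648
console.log(wrapped); // -2147483648, which a 32-bit time_t reads as a date in 1901
```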
That was a valuable lesson.
(I actually generated most entries myself while testing stuff, live in prod of course, and while there were probably fewer than 255 votes, the AUTO_INCREMENT did its job and produced an overflow.)
Twitter saw it coming and forced the issue by announcing that at a certain date and time they would manually jump the ID numbers, rather than wait for it to happen at some unpredictable time.
Thanks for all the comments! Always lots to learn from.
You probably mean 2^31 -1.
Didn't expect Chess.com and YouTube to have a crossover of users? Surprised there isn't active moderation on a site this size.
IMPOSSIBLE to predict.
 - https://www.samba.org/ftp/tridge/misc/french_cafe.txt
I think they're going to be keeping the 2016 version up for a while longer. They generally start a new one in September each year.
It's not like this article teaches much about the general "reversing mindset" (similar to the "hacker mindset", but not quite exactly the same), or the "methodology" promised in the title. And yes, there is some very interesting overlap in skills within the broad field of RE; ask any pentester who also picks locks.
Not to discredit the article itself, btw, which is fine given what it actually covers: Linux binaries, and in particular solving a crackme puzzle.
Maybe "Reverse engineering a crackme for beginners" would be a bit more descriptive.
There was one thing in particular where I knew there was a jump somewhere (if some_length < some_width) that caused bad outputs. I was playing around looking at registers etc in gdb while following along with a disassembled version of the code, but it was impossible to get any idea where to start.
I wanted something that could give me a few seconds' worth of samples of where the instruction pointer was spending its time, as a starting point, but couldn't find any such tool (on Linux).
Within my control:
- giving input files to explicitly set unique numbers to watch out for
- giving inputs that would generate bad output numbers only in the bad code path
- giving inputs to force a load of jumps down the bad or good code paths
I honestly wish CMU would release the lectures and full class materials for 15-213 (the course most typically associated with the bomb lab mentioned here). The lectures, combined with the accompanying text and labs, form a masterpiece, and it's a shame the community at large can't take better advantage of it. It's like SICP for systems: that effing good.
The tests, however, are just awful. Those can safely be dumpstered.
I remember MadWizard's assembly tutorial being very helpful at the time.
My best challenge was Brazil (a 3ds Max render engine). It had all types of checks that would only show up when rendering... But that was no match... Good times.
TL;DR: For legacy reasons, some words produce valid colors even if they don't respect the standard color formats. For example, "chucknorris" produces red.
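The mechanism is the HTML spec's "rules for parsing a legacy colour value". A simplified sketch (named colours and a few edge cases omitted), enough to show why "chucknorris" comes out red:

```javascript
// Simplified version of the legacy colour parsing algorithm from the HTML spec.
function legacyColor(input) {
  // Every character that isn't a hex digit becomes '0'.
  let s = input.toLowerCase().replace(/[^0-9a-f]/g, "0");
  // Pad with '0' until the length is a non-zero multiple of 3.
  while (s.length === 0 || s.length % 3 !== 0) s += "0";
  // Split into three equal components (R, G, B).
  const len = s.length / 3;
  let parts = [s.slice(0, len), s.slice(len, 2 * len), s.slice(2 * len)];
  // Over-long components keep only their last 8 characters.
  parts = parts.map(p => p.slice(-8));
  // Strip shared leading zeros while components are longer than 2 chars.
  while (parts[0].length > 2 && parts.every(p => p[0] === "0")) {
    parts = parts.map(p => p.slice(1));
  }
  // Finally, keep the first 2 characters of each component.
  return "#" + parts.map(p => p.slice(0, 2)).join("");
}

console.log(legacyColor("chucknorris")); // "#c00000", a dark red
```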
With that said there are some pretty cool ones (e.g. 5afe57 = safest = a green) that do match up. Can't say I can think of many hugely practical uses for this, but it's kinda neat!
is the colour of asafoetida: https://www.google.co.uk/search?q=asafoetida&source=lnms&tbm...
And #C0C0A5 is cocoa.
For fitness studio folks who are into hex (of which there are obviously billions) #F17 (bright pink) would be popular too.
Array.from(document.querySelectorAll(".wrap > div"))
  .filter(n => n.getAttribute("name").includes("t"))
  .forEach(n => n.parentNode.removeChild(n));
Bonus points: named color support for valid CSS colors, such as dodgerblue.
One oddity: for some reason, the site's CSS makes text selection highlights invisible. If you select text, the selection looks identical to unselected text, though copy/paste still works.
Also, the color boxes appear to be editable text areas: if you click on one, you can backspace or Ctrl-U and the text of the color vanishes, until you hover/unhover it again and the text gets reset (because of the 1337/LEET translation going on with hover/unhover).
aspell -d en dump master | aspell -l en expand | grep -e
I didn't think of the other possibilities (like #bada55), but instead opted to shorten them to 3-letter codes. The one I like most is #b00, a nice red.
> not found
> closes tab
With the web, the convention right now is to treat the subdomain as a different security origin (with the exception of www). So the link should show c0ffee.surge.sh, not surge.sh.
If this is a manual setting, it probably also needs to be set for neocities.org. I noticed that wordpress.com domains were being subdomained properly.
It really shouldn't be manual, it should just always show the correct origin domain.
1. I believe it began with the hacker getting my DOB/SSN.
2. The hacker called the wireless provider and forwarded all calls and texts to a burner phone. Eventually, the hacker ported my wireless number to another provider/number (not sure which), and the phone registered to my provider did not work anymore. The landline phone was also forwarding calls to another number.*
3. The hacker gained access to email (as that email was also within the telco's site). At the beginning, the hacker did not reset the password. After I changed the email's password, the hacker was still gaining access to our emails and eventually reset it, blocking my access. (The reason is that all the texts and calls were forwarding to his/her burner phone, so he/she could reset the password anytime.)
5. Requested 2FA from the bank.
6. Gained access to the bank account.
This was over the course of 3 months. It was a nightmare to resolve, and the paranoia still remains. The hacker later went on to open several bank accounts. Fortunately, this was discovered early. The entire situation was communicated to the FBI, local police, and the bank institutions, but I do not think anyone cared.
*I saw two numbers that were being used within my wireless account site to forward the calls.
After leaving the party with my youngest, I went to the grocery store, and then on home. When I got home my wife was gone, which I expected since she was picking up the older kids from the party.
Throughout this afternoon I had not been checking my phone in an attempt to be a bit less connected on the weekends.
About half an hour later my wife comes home totally freaked out and frazzled.
Apparently, after I had left, someone went into a T-Mobile store and somehow convinced the associate that my number was theirs. I had received a couple of texts from T-Mobile with a PIN where the store associate had attempted to do something, but I was not aware of them until later.
Once this person had my number, they called my bank, reset my online password, and transferred all of our money from various accounts into one of my checking accounts. The bank then put a hold on everything (thank god).
My wife happened to have been paying bills online while this was happening, and saw it all go down. Her first thought was to call me, then when I didn't answer to call the mom throwing the birthday party.
Birthday party mom told my wife I had left, so my wife assumed that our 3-year-old and I were being mugged or something. The police were involved, and she spent a good amount of time freaking out trying to find me.
All in all I had a pretty good afternoon :P
For real tho, it was a freaking mess. Took weeks to get our accounts safe, and we try to avoid using phone numbers for 2fa now.
1. Do NOT secure your sensitive accounts (Facebook, primary email, bank accounts, Twitter, etc.) with your telco phone number. A telco phone number is NOT secure!
"Create a brand new Gmail email account. Do not connect it to any of your existing email accounts. (When signing up for a new Gmail, you don't need to enter a phone number or current email, although there are fields for you to do so. Leave them blank.) Once you've created the new island-unto-itself email address, create a new Google Voice number." Use this Google Voice number to secure your primary accounts, and don't have your telco number listed in any of those accounts.
But, make sure your New Gmail account is super secure, with a security key, as mentioned in the article.
2. Check the password recovery methods for all your sensitive accounts and make sure the answers aren't duplicated from any other site. Actually, it's best to remove them, if you can.
If any security experts want to chime in, please do.
My current two banks don't have real 2FA enabled. As far as I remember, the security questions at one of my banks (a credit union) are simple enough that you could probably find the answers with a public records search. The other bank (Chase) has SMS 2FA, but outside of that it's just public-database questions. (I know this because I had my card number stolen recently; I currently don't have access to my phone, as I'm out of the country, and they asked me a few questions from a public database: have I ever lived at ABC Dr., do I know this person, what is their full name, etc.) I'd much rather be able to give the bank some kind of information that they are required to verify before they can access my account, like a verbal passphrase (meaning I couldn't access my account over the phone without it), but I don't think that's possible.
As I was required to upgrade my Micro-SIM to a Nano-SIM, I went to one of my provider's shops and asked for a Nano-SIM for phone number X. I was then asked to verbally confirm my name and address, and that's it. No ID card check, no nothing. "Here you go sir, your new SIM card will be active within a few minutes. Can I help you with anything else?" What. The.
I also find it odd that Facebook and other sites will let you sign up solely with a phone number. There are prepaid cell phone providers that recycle phone numbers, etc. It just seems so stupid to rely on a phone number for authentication alone; two-factor I'm okay with, since you still need to know the password. Twitter has a developer product where you can be texted a code to log in using only a phone number, which to me just seems wrong.
It'd be nice if, when trying to port a number, change important info, etc., they had to actually call or text you first to confirm. But one of the problems is that people lose their phones and need a new SIM or phone... For that, I think I'd require an actual store visit, but that doesn't work too well with prepaid providers without physical stores, who sell via other stores like Walmart, Target, etc. Maybe in that case, without nearby stores, they could partner with their retailers to verify ID, or have you fax an ID in.
Conversation with one of my banks the other day:
Them: Can we please verify a code sent to your phone number?
Me: Umm, sure, although that won't verify anything. Use something else to verify that it's me.
Them: Can you please verify your phone number?
Me: Umm, I don't know what phone number I used with you? Try XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, XXX-XXX-XXXX, and XXX-XXX-XXXX? They all belong to me depending on where I am.
Them: Can we use XXX-XXX-XXXX? Do you have this phone with you right now so we can send a text message with a verification code?
Me: Send your insecure SMS to any of my numbers. They all go to my e-mail inbox. [I don't need to have my "phone" with me -- my "phones" are virtual.]
I have had two phones that were my 2FA device die on me, plus OS upgrades, so I have gone through resetting 10-20 2FA accounts a few times. Though with upgrades I usually foresaw it and downgraded my 2FA beforehand.
All I wish for is that resetting 2FA were a very, very slow step-by-step process, spammingly broadcast to all emails, SMS numbers, postal addresses, etc. associated with the account. But I know that with cost-cutting customer service departments, that won't happen.
The problem is that the phone company owns your phone number; you just get access to it as part of a service, unlike a domain name, which you own.
If we change the law we'd bring more accountability.
Yes, it's a problem that security questions turn hacking into a simple public records search.
BUT most terms of service have a line like "you warrant that you've been entirely truthful with us" or something. If you give a false security answer to your bank, they potentially have grounds to freeze your money or screw you over later.
Why isn't the answer "consumers have the power: punish services that don't support FIDO by not using them"?
At best this article is saying 'don't connect anything to anything'.
The best way he came up with to secure services that insist on using SMS for 2FA (or credential reset) was to register the number of a pre-paid phone for those services.
Inconvenient? YES. But a pre-paid phone number cannot be ported by a negligent (or willfully criminal!) operator.
I have enabled proper 2FA on my Google account with U2F, but I haven't disabled everything else yet because I only have one token, and I still need something like TOTP for stuff that uses Google accounts, but doesn't support U2F.
As a closely related remark, I wish U2F would just get popular enough; it's pretty convenient, isn't vulnerable to the kind of attack SMS-based 2FA is, and protects against phishing. But almost nobody outside Google supports it, and OS/application support is rather incomplete or requires additional setup.
Seems like some combination of the following:
* using Google Voice for all account recovery situations that require a phone number
* Calling your cell phone provider to have a note placed on your account saying not to allow number porting
* Use hardware 2FA tokens. Have two set up, one as a backup in case you lose one.
* Keep a copy of your recovery codes somewhere accessible
* Probably have a safety deposit box with your backup 2fa token and recovery codes stored.
* Primary email provider should use a hardware token and not have sms recovery
* Use unique passwords everywhere and use a password manager
1) Ban SMS as a second factor for high risk targets like banks.
2) Telecom companies should require a social security number or other uniquely identifying information to provide account access.
On the phone with them, they said the card had been flagged as being used in fraud because we were off in the middle of nowhere, away from our normal spending patterns. The ONLY way to reactivate the card is for the CC company to SMS text us with a code, which we have to read back to them. The thing is, the very reason they flagged us - that we were way off in the middle of nowhere - also meant that we had no cell phone service, and couldn't receive the SMS. And given the vast size of Big Bend (getting out of the park from the hotel is a 45 minute drive), it was questionable if I'd be able to drive to a location with cell service if I couldn't fill my gas tank first.
The hotel manager overheard me arguing on the payphone with the credit card company, and he drew me a map of some pockets of cell service within the park, so in the end I was able to get it taken care of.
One ironic part of this was that the card is in my wife's name. When they wouldn't listen to her, she gave them verbal authorization to talk to me in her stead. They were willing to trust her identity for that, but not for the re-activation of the card, which doesn't make sense.
I also asked their CSR why they flagged the card. They said that I should always notify them if I'm going away. I asked what the criteria for that are, since this was an in-state trip (I live in Austin, and Big Bend is also in Texas). The CSR said that's odd, and he doesn't know why that would happen.
So good for them that they watch for fraud, but the failure mode for their heuristic is the most catastrophic possible. If the very reason they flag me also prevents me from fixing the problem, then it's a rather badly-designed system.
Users aren't warned enough that everything fails and that they will have to go through the 2FA deactivation/account recovery process sooner or later. They must really be reminded to BACK UP the recovery code(s), with "back up" meaning "keep it not just somewhere, but where you can actually find it when you need it" (but not in your password manager).
This is true for SMS 2FA as well, but completely losing the number (as long as one's a paying customer) must be significantly less common than losing a device.
She just replied, "Well, we could change the SIM to your name," didn't even check with the original owner, and 5 minutes later I was on my way with a new SIM.
There was no authentication at all. Literally anyone could have walked in, given my name and phone number, and gained access to my phone. I stopped using my phone for 2FA after that.
In China your phone number is pretty much as valuable as all your passwords combined; all services are solely linked to it.
Even though phone companies ask for ID before issuing a SIM card, I'm pretty sure a tiny bribe is enough to get past most store clerks.
Addendum: also, several of my purchases were flagged by them as fraudulent, and I have had to call them three times so far this year. All purchases were from the same Amazon account, same IP too. So I do not think they have a good fraud team.
Could you convince a cell phone store rep that you are who you say you are without your drivers license?
Or, for a million bucks, could you make a cell phone store rep think you were someone else?
The answer is why SMS 2FA isn't such a great idea: your security checkpoint is owned by an (underpaid) store representative.
If I use a 2FA app like Google's and lose my phone, I need to have the backup codes ready. If I were to use my phone number, I kind of don't need that, since I'd just get a new SIM and a new phone. But at the same time, that's exactly what is not safe now.
So what is the solution here? I liked the idea of something like DUO but not enough places use it.
Why not just have all sites that require SMS 2FA (there are a lot, including telcos) point to a personal Google Voice number? And also remove any SMS 2FA from this Google account and your personal one? Wouldn't that solve the issue they are describing? Why do you need a third account?
Kraken published a highly useful blog post on it. Do give it a read: http://blog.kraken.com/post/153209105847/security-advisory-m...
I wonder what other scams are being incubated in lesser-known parts of the world, that are waiting to be unleashed.
Even so, hackers can still use SS7 to hijack phone numbers.
Even given that, since it relies upon human choice and behavior, and does nothing versus attackers with assets within the phone company, it seems a bad idea to have 2FA via SMS.
Get the 2nd factor
In a single-node database, or even a manually sharded one, this post's advice is good (for FriendFeed, we used a variation of the "Integers Internal, UUIDs External" strategy on sharded MySQL: https://backchannel.org/blog/friendfeed-schemaless-mysql).
But in a distributed database like CockroachDB (Disclosure: I'm the co-founder and CTO of Cockroach Labs) or Google Cloud Spanner, it's usually better to get the random scattering of a UUID primary key, because that spreads the workload across all the nodes in the cluster. Sometimes query patterns benefit enough from an ordered PK to overcome this advantage, but usually it's better to use randomly-distributed PKs by default.
For CockroachDB, my general recommendation for schema design would be to use UUIDs as the primary keys of tables that make up the top level of an interleaved table hierarchy, and SERIAL keys for tables that are interleaved into another. (Google's recommendations for Spanner are similar: https://cloud.google.com/spanner/docs/schema-design#choosing...)
This is called a "candidate key" in the existing literature; much has been written about such things.
Both UUIDs and auto-increment IDs are "surrogate keys" because they are arbitrary with respect to the data.
Lastly, "natural keys" are combinations of columns that consist of the business data.
Why does your security rely on primary key obscurity? This seems like you're doing something horribly wrong, put some authentication on that or something.
And no, no they won't. Hitting a collision is very hard if you're using cryptographic-strength random UUIDs; you wouldn't even be able to brute-force 64 bits over the internet in a reasonable timeframe.
Go ahead, try the math on that. The only reason small keys are vulnerable to local attack is that an attacker can perform an enormous number of attempts per second, often billions, and can keep at it for as long as they want. A database server won't let you query anywhere near that fast. You will never get anything like that rate for network-based attacks: you're limited by bandwidth, latency, and of course the other side, who will notice if you even try this for any significant period of time and will likely block or heavily throttle your attempts.
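To make the point concrete, here is a back-of-the-envelope calculation in Python. The 10,000 guesses per second is a deliberately generous assumption for what a remote server would tolerate:

```python
# Back-of-the-envelope: online brute force of one 64-bit random key.
keyspace = 2 ** 64
guesses_per_second = 10_000        # deliberately generous for a remote server
expected_guesses = keyspace // 2   # on average you search half the keyspace
seconds = expected_guesses / guesses_per_second
years = seconds / (365 * 24 * 3600)
print(f"roughly {years:.1e} years")  # tens of millions of years
```

Even at that implausibly high rate, the expected time to hit one specific key is on the order of tens of millions of years.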
I'm tired of hearing "you don't have to say how to get the data, you have to tell the database what you want and it will get that in the most efficient manner" and then deal with an encyclopedia of byzantine rules to get it to do the aforementioned "efficient manner" with anything approaching decent performance. I can see the art, but the practicality mars it beyond recognition. It's like Venus de Milo sculpted out of duct-tape and bubble gum.
Sorry for the rant, I'm just getting frustrated with performance problems in small data sets. I've taken the courses, I've read Date and Darwen, and I'm just starting to get terribly disillusioned.
There's no substance to these claims. Chasing the links around, we finally find this article: http://www.sqlskills.com/blogs/kimberly/guids-as-primary-key... which makes the reasonable argument that random primary keys can cause performance-robbing fragmentation on clustered indexes.
But Postgres doesn't _have_ clustered indexes, so that article doesn't apply at all. The other authors appear to have missed this important point.
One could make the argument that the index itself becoming fragmented could cause some performance degradation, but I've yet to see any convincing evidence that index fragmentation produces any measurable performance issues (my own experiments have been inconclusive).
This shouldn't be right. UTF-8 encoding uses the same 8 bits for each valid UUID character that Latin-1 would. Unless someone put invalid characters in the UUID field, I would guess that the new encoding was actually UTF-16 or something.
You'll never say this out loud: 7383929. You may be able to remember it, maybe. With a UUID, you'll match the first few and last few characters just as fast in your head.
UUIDs are fine. Sorting is an issue, but at scale (the entire point of this article), how often do you need to sort your entire space of objects by primary key? You'll have another column to sort on.
Hiding primary keys and having two keys seems like a great way to make all queries and debugging twice as complicated.
The moment any DB starts to grow to this size, UUIDs lead to far fewer issues than incrementing IDs every time.
Most RDBMSs now have optimizations and native types (e.g. SQL Server's uniqueidentifier) for UUIDs/GUIDs, so this is really a moot point at this stage; most UUIDs are no longer stored as strings in databases unless they're legacy from the time before native UUID types.
UUIDs are right for most projects but not all and as typical in any system, the environment and needs of your project will dictate whether it makes sense to use them.
UUIDs eliminating the round trip and the need to deal with autonumbering/sequencing is a massive benefit. The only real con of UUIDs is the extra 8 bytes, but they make up for it by reducing the need for lookups at runtime when creating new data or associating data with it.
One of our Ops guys did an experiment where they put a uniqueness constraint on the ID column and added an auto-incrementing primary key column that's never exposed to the code driving the thing. It apparently sped up our DB performance by orders of magnitude.
It also turns out that MySQL would perform faster just by leaving those values as strings instead of converting them to binary values. We've got some outside pressure to use Oracle instead of MySQL, and apparently it performs much better than MySQL with our current schema so we apparently aren't going to do anything to improve the MySQL performance or change any of this behaviour.
Let me know if you want ports in any other languages - the algorithm is really to just treat the UUID as a hexadecimal number (that's actually what it is) and re-encode it into any other alphabet of choice.
That said, always use native UUID types in datastores - they'll convert to bytes/numbers internally and will always be the most efficient. For other situations, remember that they're just numbers, so you can write them in binary, ternary, octal, decimal, hexadecimal, vowels, baseXX, or really any other alphabet you want. The bigger your alphabet (as long as encoding remains efficient, like ASCII under UTF-8), the better your gains will be.
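As a sketch of that re-encoding idea, here is a base-62 round trip in Python (the alphabet and function names are my own, not from any particular library):

```python
import uuid

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode(u: uuid.UUID, alphabet: str = ALPHABET) -> str:
    # A UUID is just a 128-bit integer; re-encode it in a bigger base.
    n, base, digits = u.int, len(alphabet), []
    while n:
        n, r = divmod(n, base)
        digits.append(alphabet[r])
    return "".join(reversed(digits)) or alphabet[0]

def decode(s: str, alphabet: str = ALPHABET) -> uuid.UUID:
    n = 0
    for ch in s:
        n = n * len(alphabet) + alphabet.index(ch)
    return uuid.UUID(int=n)

u = uuid.uuid4()
assert decode(encode(u)) == u   # lossless round trip
assert len(encode(u)) <= 22     # base62 needs at most 22 chars vs 36 for hex
```

Base62 cuts the canonical 36-character representation down to at most 22 characters while staying URL-safe.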
Notice how the author assumes UUID v4 in the conversation. There are very few reasons to use the other versions, but we are still paying their price in code complexity all the time.
Look at this UUID parsing code: https://github.com/sporkmonger/uuidtools/blob/master/lib/uui...
What it really should be is `[uuid_string.gsub('-', '')].pack('H*')` (for non-rubyists: remove the dashes, decode the hex back to binary).
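For comparison, the same two-step decode in Python (the sample UUID below is an arbitrary placeholder):

```python
import uuid

s = "123e4567-e89b-12d3-a456-426614174000"  # arbitrary sample UUID
raw = bytes.fromhex(s.replace("-", ""))     # strip dashes, hex -> 16 bytes
assert raw == uuid.UUID(s).bytes            # matches the stdlib's own decoding
```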
Their representation is also not that good since hex encoding is not very compact.
I guess what I'm trying to say is that UUIDs are often used as a default unique identifiers but they are actually not that good.
In what context would a primary key change, even when sharding? In my entire career I have yet to see it. Also any sane person would never sort random values. If you need sorting in your table, provide some kind of indexed timestamp.
On top of that, you get IDs that are impractical to guess, which, while it wouldn't replace other security measures, would still give you some collision resistance and probably avoid some bugs, given how unlikely it is to accidentally pick the same key for two different entities.
I'm sure there are pathological cases for UUIDs as primary keys in certain scenarios, like perhaps a very high number of small records, but I've not come across them myself. You obviously have to know your own data and database if you have some very specific requirements.
Many UUID generators produce data that is particularly difficult to index, which can cause performance issues creating indexes. To address this, Datomic includes a semi-sequential UUID generator, Peer.squuid. Squuids are valid UUIDs, but unlike purely random UUIDs, they include both a random component and a time component.
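A rough Python sketch of the same idea (not Datomic's actual implementation): overwrite the top 32 bits of a random v4 UUID with the Unix time. The RFC 4122 version and variant bits live in the low 96 bits, so they survive intact:

```python
import time
import uuid

def squuid() -> uuid.UUID:
    # Replace the top 32 bits (the time_low field) of a random v4 UUID
    # with the current Unix time; values generated close together in
    # time then also sort close together in the index.
    random_part = uuid.uuid4().int & ((1 << 96) - 1)
    secs = int(time.time()) & 0xFFFFFFFF
    return uuid.UUID(int=(secs << 96) | random_part)

s = squuid()
assert s.version == 4                              # still a valid v4 UUID
assert abs((s.int >> 96) - int(time.time())) <= 1  # sortable by creation time
```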
One interesting thing we ran into when implementing is that C#'s binary format and string format must be different to be sequential. So we have to detect whether the GUID is stored as a string or binary and put the timestamp in the correct place to ensure it is actually sequential.
Here's the PR for the feature for anyone interested: https://github.com/PomeloFoundation/Pomelo.EntityFrameworkCo...
This may be practical from a storage standpoint but string-based indexes on an SSD are pretty damned efficient.
Why would you sort these to begin with; what ordering of essentially randomness (part of the point) makes sense?
What about the hi/lo algorithm as a middle ground?
In short, and I hope I don't oversimplify, each "shard" or "cluster" in the database gets a "block" of ids it can then go and assign on their own, the sequential "atomic" increase happens only once per hi "block", lowering the contention.
This gives you nice integers, incremental-ish most of the time.
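A toy sketch of the hi/lo scheme in Python; the allocate_hi callback here stands in for the single contended database round trip (e.g. an atomic UPDATE ... RETURNING on a counter row):

```python
import itertools

class HiLoGenerator:
    # Each block of `block_size` ids costs exactly one call to the shared
    # hi allocator; everything else is local and contention-free.
    def __init__(self, allocate_hi, block_size=1000):
        self._allocate_hi = allocate_hi
        self._block_size = block_size
        self._lo = block_size          # force an allocation on first use
        self._hi = None

    def next_id(self) -> int:
        if self._lo >= self._block_size:
            self._hi = self._allocate_hi()   # the only contended step
            self._lo = 0
        value = self._hi * self._block_size + self._lo
        self._lo += 1
        return value

counter = itertools.count()                  # stand-in for the DB counter row
gen = HiLoGenerator(lambda: next(counter), block_size=3)
ids = [gen.next_id() for _ in range(7)]
assert ids == [0, 1, 2, 3, 4, 5, 6]          # incremental-ish integers
```

With several concurrent generators the ids stay unique but come out block-interleaved rather than strictly sequential, which is the "incremental-ish" property described above.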
I like the notion of integers internally and UUIDs externally (as integers, of course! I would never have saved one as a varchar, I swear! OK, I was a noob... I deserve to be shamed).
Great post all in all!
If the IDs are UUID, then the easiest way to fix the values is to drop the index and re-create it, making all of the other data in the index unavailable as it's being recreated.
The less-easy way with UUIDs is to select just the broken events, create new patched events, delete the old events, and insert the new ones in the right index. But you'd have to branch off of your regular indexing logic to do this, probably writing a separate script. Of course if you make a mistake, you may end up with either duplicate documents or loss of data, compounding the original problem.
So I agree, have IDs that are deterministic (that they can be recreated using some known formula and source data, for example: documenttype_externalid_timestamp).
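A trivial sketch of such a deterministic ID scheme in Python (the field names are illustrative, following the pattern in the comment above):

```python
def deterministic_id(doc_type: str, external_id: str, timestamp: int) -> str:
    # Rebuildable from the source data alone, so a re-index can overwrite
    # the broken document in place instead of creating a duplicate.
    return f"{doc_type}_{external_id}_{timestamp}"

assert deterministic_id("order", "A1B2", 1496275200) == "order_A1B2_1496275200"
```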
We have multiple components over different stacks and id could be generated anywhere in the components. We had to live with either building unique id per table separate infrastructure or UUID. UUID works perfectly and with POSTGreSQL, it's just awesome.
I'm dealing with that from several vendors atm.
> Think twice

In two cases of very large databases I have inherited at relatively large companies, this was exactly the implementation. Aside from the 9x cost in size, strings don't sort as fast as numbers because they rely on collation rules.
Eh, I've done that before because it made some interaction with Entity Framework easier (don't recall what now). Hasn't really mattered. The space for storing GUIDs has never been a meaningful constraint for anything I've ever worked on (9x is also nuts and assumes that your database uses 4 bytes per character). Sorting UUIDs is also generally uninteresting since they aren't meaningful by themselves. Maybe if you're doing lots of joins you might care about this.
This solves basically all the problems and we use it in production to number several tables with billions of events per day.
UUID-4 is random; UUID-3 and UUID-5 look random but are actually name-based hashes (MD5 and SHA-1, respectively).
UUID-1 is time-based with the time leading, and you can often control the sequence (14 bits) and nodeid (48 bits) fields to be used as whatever you want to avoid collisions.
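Python's standard uuid module exposes exactly those knobs; a small sketch (the node and clock_seq values below are arbitrary placeholders for whatever shard or machine id you want to pack in):

```python
import uuid

# clock_seq (14 bits) and node (48 bits) can be supplied explicitly,
# e.g. to pack a shard or machine id into the node field so two
# generators can never collide.
u = uuid.uuid1(node=0x0000DEADBEEF, clock_seq=0x1234)
assert u.version == 1
assert u.node == 0x0000DEADBEEF
assert u.clock_seq == 0x1234
```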
If I follow my advice, the type of an ID is an implementation detail of the persistence layer and/or service endpoint.
Is that normal practice? Their DBA was insisting that it's normal.
If you are building mobile apps that sync state, UUIDs make your life so much easier. Optimistically perform writes locally, then perform writes remotely and retry on exponential backoff in case of a network error.
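A minimal sketch of that retry loop in Python, assuming a hypothetical push callable that raises ConnectionError on network failure:

```python
import random
import time

def sync_with_backoff(push, record, max_attempts=5):
    # The record carries a client-generated UUID, so retrying after an
    # ambiguous failure can't create a duplicate: the server just
    # upserts on the same key.
    for attempt in range(max_attempts):
        try:
            return push(record)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # exponential backoff with a little jitter
            time.sleep((2 ** attempt) * 0.01 + random.random() * 0.01)

calls = []
def flaky_push(record):
    # stand-in for the remote write: fails twice, then succeeds
    calls.append(record)
    if len(calls) < 3:
        raise ConnectionError("network blip")
    return "ok"

result = sync_with_backoff(flaky_push, {"id": "a-client-generated-uuid", "body": "hi"})
assert result == "ok" and len(calls) == 3
```

The key point is the id in the record: because the client minted it before the first attempt, every retry is idempotent from the server's point of view.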
Each of the clients reserve a chunk of Lo numbers, and increment the Hi number. Basically, they would pre-allocate a chunk of id ranges, and this allowed good distributed id allocation performance, while somewhat keeping local ordering.
Client generated ids are very useful to do.
1. Store UUIDs in a UUID field. Why start the article with such a trivial finding as a text field being suboptimal?
2. Use sequential uuids.
3. Several benchmarks have shown that the performance hit is minimal.
4. The only way to communicate with ids is to copy and paste them. Never try to memorize, talk about them or type them.
it's nicer than using UUIDs because the strings are much shorter.
I don't think this is a real problem. If you're relying on your IDs being "unguessable" (and introducing engineering complexity to that end) for security, you've already failed.
What do you think about such setup?
Database coder reinvents interned atoms.
(10 million new rows every day)
Most of the drawbacks discussed don't exist if you're using a key value store.
How does Apple not expect that annoying developers with its App Store process (so much so that things like this exist: https://fastlane.tools/), AND charging them 30%, AND apparently not actually reviewing anything about the apps making it into the store, will eventually drive people away from it?
(Why yes, I am cranky over the amount of hoops I had to jump through to get to the point of asking apple for permission to put my beta on my co-founder's iPhone)
#2 - Average computer/phone users are willfully ignorant. I would say stupid, but that's a judgement call (even though I think it's true). Someone with knowledge can advise them, but they cannot be bothered with all that fuss. They'd rather ignore sound advice and push buttons. After all, look at who runs the country and the complacency of many of its people.
Have you ever had a friend who was a lawyer? Did you ever get some traffic ticket and think, "Hey, I'll ask Bob if he can help me handle this!"? I'm guilty of this once in a while. But "average users" are guilty of doing this to technical people all the fucking time. And when we advise them of behaviors to change to avoid future incidents, they nod and agree, but then repeat the stupid behavior later.
Sorry for the rant, but perhaps it's time to just start replying to scammed/screwed users with, "Oh wow, that's really unfortunate. I guess you'll have to go buy a new phone/computer." Maybe that will jar them into actually using their brains.
* Edit for wine-related typos.
Also, do people still use the App Store? I don't think I have casually browsed for apps in 5 years or more.
How long will Apple allow this? At the very least it should be impossible to bid on trademarked terms, and no ad should ever outrank an exact-match result.
I also had another app that was accepted into the app store then when I pushed an update release I was informed that my logo had to change because it used Apple's camera emoji. I only did this because another popular app did the same thing (down for lunch). In order to stay compliant, I had to change my logo.
I'm fine with said rules existing as in theory they are meant to protect lay customers from junk like this. How on earth did this thing make it through a review process that's so hard on some apps?
I wish Apple would apply its rules and vetting with more consistency.
 Or so I have heard ... from a friend
I've never done it, either. I clearly remember the only few times I clicked on AdSense ads - once by mistake, and was extremely annoyed at the results (it was a sort of list like search results), and 2-3 times to test my own AdSense ads (yeah, against ToS).
Yet AdSense is raking in billions. I've always wondered who actually clicks on the ads :D
How did this app get through that?
I get why people do it, but it's sad that they do.
Never, I guess.
As a long-time Android user (and no, I wasn't happy for the most part; I wanted to taste the iOS waters both as a user and as a mobile dev) who recently moved to an iPhone SE, I feel really disappointed.
Nice trip down the rabbit hole though; should see how bad it gets with VMs.
Little distinction between ads and search results? No filtering or approval for ads? Scammy $100/week subscriptions for nothing? Meanwhile you're not allowed to make fun of the president's elbows or whatever. Come on.
Which is why the XQuartz/&c. user experience on macOS really, really surprised me. It's absolutely unusable. Inkscape for macOS basically may as well not exist, as far as my experience with it goes.
Are there other comparable GTK+ apps that work well under macOS or is this a common story?
How in the heck did Canonical squander such an incredible opportunity to be the de facto standard for Ubuntu/FOSS code hosting by letting Launchpad go stale so badly?
They freaking built it into their distribution of apt with PPA shortcuts, etc.
They seem to want to differentiate themselves (e.g. as "not Photoshop" in GIMP's case) but seem to equate that with ignoring good UI/UX design.
We found it easier to grow and expand all over the world and didn't grow as much in the Bay Area as we thought we would. Currently only 20-30 of our 550+ people live in the Bay Area.
Also as far as space goes, that is just one photo of the downstairs area of the space. You can see more at https://automattic.com/lounge/ and some early shots here https://customspaces.com/photo/uklO4BLxis/
P.S. I'm the guy in the green shirt in the photo, woo hoo!
Of my past work places--death star cube farms in old silicon valley to tiny rooms in sweltering Berkeley summers to shiny live/work lofts to giant sprawling disneyland like campus to noisy hipster coffee shops--that WordPress office would be up there in terms of a good place to work at.
The real story is the upward trend that if you give an inch, your employees will take a mile. If you offer telecommuting, workers will not show up.
I've been freelancing and telecommuting the past five years. I've built my workstyle around chat bubbles, slack channels, video calls, and emails whether 2PM or 2AM.
I've built my lifestyle around that. As in I work around my life. Things just... get done without a direct measure of productivity anymore.
Sitting somewhere from 9 to 5 is like watching TV from the 2000's, ordering Netflix DVDs when we live in the 2010's with streaming Netflix.
And as one disappears, so does another, and another. When you look around and realize no one else is there anymore, the office just becomes a ghost town while the virtual water cooler becomes more and more vibrant.
No one goes to the office anymore; it's too lonely.
I think the benefits of working remotely are still poorly understood, and long-term the companies that are being built remote-first are going to have a significant engineering advantage over those that bolt remote working on after the fact.
Now spare a thought for those of us sweating in the digital wasteland that is Australia.
Every so often I have to walk over to my fridge and nudge my 4G modem to improve the signal strength. I have a script running 'round the clock to reset the darn thing if the connection drops completely (somehow this fixes it). I need the 4G connection because the copper wire to my house is so broken it can no longer support an ADSL signal.
Fibre is apparently coming in like... 2019? It is expected to run at a maximum of 25Mbps.
Needless to say, remote work is not exactly on the cards.
I think being remote with an office setup is the best you can get. I can go in at any time I want, and I still have a nice environment to work from.
Being remote doesn't necessarily mean no offices.
I now have a quiet, private space to work, and a nice 5-6 minute bicycle commute :D
It costs a little bit (~$300/mo for the space & utilities - yay for small-town-Ohio pricing), but it's totally worth it.
No, the goal is to reduce head count without laying people off. Companies that go from remote to non-remote do it because it is an easy way to reduce head count without having to lay people off; it is a way to force people to look for work elsewhere.
People who cannot relocate, or who have built their lives around working from home, cannot or will not make the transition back to working in an office easily. As such, they will seek out employment that better fits their needs, which is ultimately these companies' goal, because they want to avoid the "XX Company is laying off X,XXX people in the next quarter" headlines.
We either work out of the Airbnb we rent or a cafe. In some cities we were close to a reasonably priced co-working space and would work out of there.
The big draw for me has been the flexibility. We try as hard as possible to do asynchronous work, so some days I will take a few hour break in the middle of the day and go do something, and then work later into the evening.
I can understand occasionally working out of a coffee shop. But who does this all the time and remains productive? And is it really fair to the coffee shop?
Even better would be if this low-density land could be incorporated into the huge 667 Folsom office/residential project planned next door. You could build 50,000+ sqft on that large lot and help with both the office and housing shortage. Unfortunately SF's planning process is so slow and uncertain it is probably too late even if the owner and tenants agreed.
Is anything replacing the workplace as the form of community for people or is that something that is just being lost?
Maintaining a 15,000-square-foot office in that area for that number of employees seems oversized in any case.
Of course there are plenty of situations where talking face to face is more informative, but I often find that to be rare.
Communicating via text has the added benefit of documentation and allows you to think about what you are actually writing. I find describing what I plan to do with a client via text helps me organize my thinking.
I work in data analysis though. So maybe this doesn't apply to other fields.
There is countless research clearly saying that open-plan spaces are bad for productivity, yet for some reason they always win. And it's easy to see why: you only have to throw out buzzwords like collaboration, teamwork, open... and done.
yes | rm -r large_directory
yes | fsck /dev/foo
Someone does a Go version and gets the same speed as GNU yes. Someone else tries several languages. This person got the same speed in luajit, and faster in m4 and php. Ruby and perl about 10% slower, python2 about 10% slower still, and python3 about half that. The code is given for all of these, and subsequent comments improved python3 about 50% from his results, but still not up to python2.
It's about twice as fast as GNU yes now on my FreeBSD system here.
I don't understand this reasoning. Why is it being limited to main memory speed? Surely the yes program, the fragments of the OS being used, and the program reading the data, all fit within the L2 cache?
Not complaining, I like this kind of analysis
But it seems you won't be limited, in a shell script, by the speed you can push y's
Make the static array BUFSIZ * 1024 to trim the syscalls by a factor of 1000.
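In Python terms (a sketch, not GNU's C implementation), the trick looks like this: build one big buffer of repeated lines, so each write() call pushes tens of kilobytes instead of two bytes:

```python
import sys

def yes_buffer(word: str = "y", size: int = 64 * 1024) -> bytes:
    # One big buffer of repeated lines amortizes the per-write()
    # syscall overhead, just like GNU yes's BUFSIZ-sized block.
    line = (word + "\n").encode()
    return line * max(size // len(line), 1)

def yes(word: str = "y") -> None:
    buf = yes_buffer(word)
    out = sys.stdout.buffer
    while True:                  # runs until the reader closes the pipe
        out.write(buf)

buf = yes_buffer()
assert len(buf) == 64 * 1024 and buf[:4] == b"y\ny\n"
```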
Why did the GNU developers go to such lengths to optimize the yes program? It's a tiny, simple shell utility that is mostly used for letting developers lazily "y" their way through confirm prompts thrown out by other shell scripts.
is this a case of optimization "horniness" (for lack of a better word) taken to its most absurd extreme, or is there some use case where making the yes program very fast is actually important?
`yes` will help me on the "see what happens when something uses all the CPU and memory" test case. Thanks Reddit/HN!
So, what's the distribution of #bytes read for runs of 'yes'? If we know that, is GNU 'yes' really faster than the simpler BSD versions?
Also, assuming this exercise is still somewhat worthwhile, could startup time be decreased by creating a static buffer with a few thousand copies of "y\n"? What effect would that have on the size of the binary? I suspect it wouldn't go up much, given that you can lose the dynamic linking information (though that may mean having to make direct syscalls, too).
I would write() the buffer each time it gets enlarged, in order to improve startup speed.
Also: the Reddit program has a bug if the size of the buffer is not a multiple of the input text size.
And it's increasing the buffer by appending one element at a time, instead of copying the buffer onto itself, which would reduce the number of loops needed (at the cost of slightly more complicated math).
After reading here so many unfair critics and pedantic dislike over PHP, I just want to say: STFU.
... Just to name a few.
It has been "In Review" for a suspiciously long time now. So I think it might be testing the application of these updated policies.
I have often submitted updates to App Review which include the ability to download and install executable code (along with review notes detailing my reasoning) with the knowledge that they would be rejected. I have also appealed Apple's rejections in order to effect a change in policy for the App Store. At some point during phone calls with the reviewers they told me they were "advocating for policy change internally on my behalf" even if they couldn't approve my app right now. I'm so glad policy has changed now.
- The absence of a really good typing story. The 12.9 iPad Pro with smart keyboard is nice for typing text but terrible for moving the cursor around. It's agonizingly slow to do it with keyboard (highlighting is worse, for some reason) and inaccurate to do it with finger/fiddly to do it with Pencil.
The only text editor with vim keybindings (an absolute must in an environment where it's hard to move the cursor normally...) of which I'm aware is Buffer, while the only text editor with both good syntax highlighting and good github integration (via Working Copy) is Textastic. Honestly, I really wish one of those two would just buy the other so that I could have both.
- The absence of a really good ssh story. Prompt is nice, but for some reason, whenever I try to SSH into anything, there's so much latency that it is really painful to actually do anything. Maybe I just have slow network connections? But anyway, so much for just coding on a linode or something in vim.
I think this could really help a lot of students for what it is, and I hope it does well in that regard.
Whatever the provider, I really hate those walled gardens where what you can deliver or not is at the whims of a company whose interest is not always aligned with yours. I understand being on them is necessary due to how large their market are, but this is really not where I hoped we would be fifteen years ago.
I guess I'm merely venting, and daydreaming about what could have been, "if only"...
The day I can run and write Python natively on iOS is the day I buy an iPad Pro. Right now there are some good ssh clients and I can write code from a terminal, but pros of the device are not worth that tradeoff right now IMO.
I like the safety of the iOS walled garden but I also see real value in complex IDEs like IntelliJ running on iPad Pros.
The degree of paternalism is astounding.
This title is somewhat confusing - it makes it sound as though educational apps and dev tools somehow weren't allowed to execute code before, which doesn't make any sense.
Animation CPU Studio will be published soon.
I hope that we can now expect to get this feature, soon.
What is happening to hacker culture? I think as the influx of new programmers increases, awareness of the culture's ethos of freedom, liberty, anti-authoritarianism, and anti-corporatism has to be raised.
Or we will have people loving to be jailed by their benevolent overlords in "apple/google/facebook/etc"
A side remark: people often say how great the US / North America is for entrepreneurs, compared to (continental) Europe, where there is a lot of red tape and regulation. But in my opinion, if I were to do this in Germany, there is no way ALDI (which Trader Joe's belongs to, IIRC) could sue me out of business. Not even with the old frivolous "we are wrong but you can't afford the defense" trick. There is just so much legal uncertainty in NA that it would give me nightmares doing business there.
I can say that this does make me upset at Trader Joe's, and I will be considering where else I can spend my money.
They could have worked with this guy, eventually set up a Trader Joe's in Canada, and then offered to let this guy run it. That would have been better for their brand, in my view.
I care about what companies do. Costco hires employees and treats them well. It pays above average, and it hires and keeps on people with disabilities and injuries, even if they can't do everything someone else can do. It makes me feel good to shop there. And its employees are loyal, hard-working, happy and friendly, and they have less pilferage than other stores.
This idea that a company has a duty to be a dick is silly. Companies should care about their brand, and about being a good corporate citizen.
> Defendant Michael Norman Hallatt purchased Trader Joe's-branded goods in Washington State, transported them to Canada, and resold them there in a store he designed to mimic a Trader Joe's store. Trader Joe's sued under the Lanham Act and Washington law.
> It is uncontested that Defendant Michael Norman Hallatt purchases Trader Joe's-branded goods in Washington state, transports them to Canada, and resells them there in a store he designed to mimic a Trader Joe's store.
Emphasis mine, and it's a big deal. Trader Joe's would have had a hell of a time bringing a suit if the store had been called Hallatt's Little Shack and had looked like any random grocery store.
He should have realized the need, and done things like match their product mix with his own brands, work on making the store's own feel, and dampened direct association to Trader Joe's. He didn't and it bit him in the ass. No sympathy here.
I mean, he did change his store name to Pirate Joes (from the far more ambiguous Transilvania Trading) and his actions seem to betray less charitable motivations than his words would lead you to believe ("This is not a business I should be doing from a personal profitability standpoint - https://www.theguardian.com/world/2014/nov/21/pirate-joes-tr...)
That said, seems like Trader Joe's missed an opportunity for a win-win partnership with someone who had already developed rudimentary logistics to meet a demonstrated demand. But then it doesn't surprise me based on my 30+ years shopping at Trader Joe's: I would never describe them as innovative, instead I'd say they are very focused on what they've been doing well for decades.
From my perspective: every product sold in Canada was purchased in the U.S. so... if anything, this Pirate Joe fellow has provided additional sales for Trader Joes and proved that there is demand for Trader Joe's products in Canada at an incredible 40% markup!
If they're not interested in servicing Canada, would it not be to Trader Joe's advantage to enter a formal franchising or wholesaling agreement with Pirate Joe?
There must be more to this story in terms of Trader Joes objectives as opposed to Pirate Joe's methods or the legal proceedings.
Then again it kind of annoys me that TJ's just didn't open a damn store in Canada. And if they don't want to do that then why not just look the other way while someone else took on the risk of importing their products into another country?
This certainly wasn't a trademark issue. Trader vs Pirate. There was no question this store wasn't run by Trader Joes/Aldi North. They were buying in bulk to stock a store where they couldn't normally get the goods. Reselling should be 100% A-OK. Any trademarks go along with the products. And as far as I would guess, the grocer certainly wasn't tampering with anything - if (s)he was, they'd go out of business quick.
This is just normal SLAPP-style punitive legal actions that a large monied corporation can do to stop the little guy from doing legal behaviors that they don't like.
The only reason anyone's surprised or outraged is that the store feels like a small, homey, good-natured place full of organic this-and-that that's lower priced than you'd expect. That might have been true 40 years ago, for a store that had the same name but was a different entity entirely.
Trader Joe's now is just a giant marketing and packaging front for 70 billion dollar a year Aldi, a multinational chain. It's a corporation. None of this behavior surprises me at all.
If the person wants to order 10,000 pallets of cookies at retail price, why wouldn't you sell the cookies to him? He's not stealing from the back of the store; he's paying full price. I'm very confused why Trader Joe's would not have created a direct connection with the guy.
This reminds me of major services cutting off API access because they thought they could do it better in-house. Just HIRE the person doing your own service better in a different way.
Maybe they see Target Canada failure and are scared away by that?
Why didn't he just create his own store with his own brand and mimic the Trader Joes products and aesthetic? He could buy goods in bulk at much lower prices. He doesn't have to worry (much) about legal issues or spend money on them.
Clearly demand was so high he could still get away with charging very high prices.
The main draw to Trader Joe's is that it's part of the journey across the line. This week I'll be doing this same old routine -- pick up some packages at the mail place ($2 per package), hit up a few grocery stores for different hot sauces and staples (including condensed milk in a squeeze tube), have lunch in Bellingham, go for a walk around Fairhaven, then return home.
Trader Joe's is part of that journey, much like Target (who had a massive, depressing attempt to break into Canada). Strip away that special-trip aspect, and all you really have is another grocery store with a few exceptional items.
I'm sorry? Trader Joes, in at least 4 locations I've seen in California, does special signs and displays the week before Burning Man to market to Burners. Where is this writer from?
Right now he would simply stop buying the other products while having his own brand.
If I go to Disney World, purchase a Mickey Mouse doll, and take it home, I have the right to do with that doll whatever I want: burn it, give it to my daughter, or resell it at whatever price I see fit.
However, I don't believe I have the right to go -- as an agent of another (presumed competitor) -- purchase that same doll, and then resell it in my own store. I have no resale agreement with Disney to do so. In a typical reseller arrangement, wouldn't a store (e.g. Target) have an agreement with Disney to purchase bulk product for resale, presumably at a reduced price, but also under strict guidelines as to how it could do so? For example: cannot be sold above a certain price, cannot be sold next to adult content, etc.
On a side note: I have to believe that (while not a TJ problem or related to the lawsuit) there were other issues with what Pirate Joe's was doing related to imports, possible tariffs not being adhered to, etc.
The site in general is a beautiful work of art, a great blend of attention to detail with comedy of computing in that era.
https://en.wikipedia.org/wiki/Lenna - tl;dr is this iconic test picture for computer imaging was a cropped Playboy centerfold from 1972. I've just finished a PhD which included a fair bit of image processing, but I was unaware of the story behind this iconic image.
Title: RPG MO
(Don't leave a space before iframe in the command)
Is this open source? So we could see how it was made?
Accidental "works best in browser X" 90s reference right there.
I find Safari superior to every other browser on any platform in every possible metric except for dev tools, which took a nose dive when they ditched the open source WebKit one for this calamity.
- Half Life 3
- Defrag <3.
- Running Windows93 inside Windows93 inside Windows93 inside Windows93...
A work of art, indeed. Kudos!
Now it's crashed and won't reload.
Is there a workaround for my workflow?
Then I realized everything is written with web technology.
It's also quite buggy (chrome/linux) which adds to the whole Windows 9x feeling. Not sure if intentional but well done anyhow!
Given that kind of zealotry, it irks me that you can launch an infinite number of nested "Virtual PCs". Obviously it makes for some fun screenshots and is technically impressive in itself, but Windows early on never allowed you to run Virtual PC inside Virtual PC. So this is clearly wrong!
In short, not considering OCD, where do I file the bug-report? :)
I wonder how many hours I could waste looking for more Easter eggs ;]
Otherwise, kudos to the devs for creating this amazing work of art!
I'd like to throw some event handlers on "Puke Data" to allow changes to the dsp graph.
Serious hard work went into this site.
dir is not defined
If there's one lawyer in town, they drive a Chevrolet. If there are two lawyers in town, they both drive Cadillacs.
Basically, there are two approaches the plaintiff might take here. The simplest is to cite the doctrine of equivalents. This is basically the notion that if you do the same thing in the same way for the same purpose, then it's the same process, even though you are using digital instructions instead of logic gates. The legal theory here is pretty well settled. The problem is that you'd need to justify that digital instructions are obviously equivalent to logic gates, and a skilled professional would have equated them at the time of the patent's filing.
The other approach is to argue that an emulator actually is a processor, and therefore fits the literal claims of the patent. The explanation for this is pretty well-established: it's literally the Church-Turing Thesis. However, the viability of this argument depends on the language of the patent claims. Also, it's hard enough to explain the C-T Thesis to CS students. My undergrad had an entire 1-credit-equivalent course that basically just covered this and the decidability problem. Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
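To make the "an emulator is a processor" idea concrete, here's a toy sketch (a made-up 3-instruction ISA, purely illustrative and unrelated to any real patent claim): a software emulator's fetch-decode-execute loop is the same job a hardware processor performs, just expressed in software.

```python
# A hypothetical 3-instruction "ISA" and a software emulator for it.
# The loop below fetches, decodes, and executes instructions -- exactly
# what a hardware processor does with logic gates.

def run(program, acc):
    """Execute a list of (opcode, operand) pairs against an accumulator."""
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "NEG":
            acc = -acc
        else:
            raise ValueError("unknown opcode: " + op)
    return acc

prog = [("ADD", 3), ("MUL", 2), ("NEG", 0)]
print(run(prog, 1))  # (1 + 3) * 2 = 8, then negated -> -8
```

Whether a judge would accept that this loop "is" a processor in the sense the patent claims require is, of course, exactly the hard part.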
Now, Intel certainly has enough resources to do both of these things (and they may also have precedent to cite, that didn't exist back then or that wasn't relevant to that case). Don't take this as an opinion on any possible result, it's just information such as I remember it.
- https://en.wikipedia.org/wiki/Doctrine_of_equivalents
- https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
They no doubt have been filing additional patents over the years. But I'm sure MS and Qualcomm have plenty of their own patents to bargain with.
Also their warning could backfire if it gives Microsoft one more reason to finally walk away from x86 compatibility... not that this is likely to happen anytime soon.
> AMD made SSE2 a mandatory part of its 64-bit AMD64 extension, which means that virtually every chip that's been sold over the last decade or more will include SSE2 support. [...] That's a problem, because the SSE family is also new enough -- the various SSE extensions were introduced between 1999 and 2007 -- that any patents covering it will still be in force.
AMD64 requires SSE2 which was introduced in 2001, right? So isn't it just 1 year until Microsoft can put in what's required for the AMD64 architecture?
Scorched earth policy will likely not be defensible under fair use law. Reverse engineering for compatibility has a few precedents.
I mean, Apple and Samsung had a billion dollar lawsuit while Samsung chips were still in iPhones. It's certainly precedented to sue a corporation you're actively doing business with.
Intel's strategy of going after other hardware companies may not translate neatly to emulators.
"if WinARM can run Wintel software but still offer lower prices, better battery life, lower weight, or similar, Intel's dominance of the laptop space is no longer assured."
Peter. My man. I laughed. I cried.
For the millionth time, the ARM ISA does not magically confer any sort of performance or efficiency advantage, at least not that matters in the billion+ transistor SoC regime. (I will include some relevant links to ancient articles of mine about magical ARM performance elves later.) ARM processors are more power efficient because they do less work per unit time. Once they're as performant as x86, they'll be operating in roughly the same power envelope. (Spare the Geekbench scores... I can't even. I have ancient published rants about that, too).
Anyway, given that all of this is the case, it is preposterous to imagine that an ARM processor that's running emulated(!!!) x86 code will be at anything but a serious performance/watt disadvantage over a comparable x86 part.
This brings me to another point: Transmeta didn't die because of patents. Transmeta died because "let's run x86 in emulation" is not a long-term business plan, for anybody. It sucks. I have ancient published rants on this topic, too, but the nutshell is that when you run code in emulation, you have to take up a bunch of cache space and bus bandwidth with the translated code, and those two things are extremely important for performance. You just can't be translating code and then stashing it in valuable close-to-the-decoder memory and/or shuffling it around the memory hierarchy without taking a major hit.
So to recap, x86 emulation on ARM is not a threat to Intel's performance/watt proposition -- not even a little teensy bit in any universe where the present laws of physics apply. To think otherwise is to believe untrue and magical things about ISAs.
HOWEVER, x86-on-ARM via emulation could still be a threat to Intel in a world where, despite its disadvantages, it's still Good Enough to be worth doing for systems integrators who would love to stop propping up Intel's fat fat fat margins and jump over to the much cheaper (i.e. non-monopoly) ARM world. Microsoft, Apple, and pretty much anybody who's sick of paying Intel's markup on CPUs (by which I mean, they'd rather charge the same price and pocket that money themselves) would like to be able to say sayonara to x86.
The ARM smart device world looks mighty good, because there are a bunch of places where you can buy ARM parts, and prices (and ARM vendor margins) are low. It's paradise compared to x86 land, from a unit cost perspective.
Finally, I'll end on a political note. It has been an eternity since there was a real anti-trust action taken against a major industry. Look at the amount of consolidation across various industries that has gone totally uncontested in the past 20 years. In our present political environment, an anti-trust action over x86 lock-in just isn't a realistic possibility, no matter how egregious the situation gets.
So Intel is very much in a position to fight as dirty as they need to in order to prevent systems integrators from moving to ARM and using emulation as a bridge. I read this blog post of theirs in that light -- they're putting everyone on notice that the old days of antitrust fears are long gone (for airlines, pharma, telecom... everybody, really), so they're going to move to protect their business accordingly.
Edit: forgot the links. In previous comments on exactly this issue I've included multiple, but here's a good one and I'll leave it at that: https://arstechnica.com/business/2011/02/nvidia-30-and-the-r...
QEMU emulates x86 chips, as do other emulators. I wonder how those are affected?
I think this theory of infringement has to run into various thought-experiment problems such as : can I auto-translate that binary into some other instruction set, then execute the translated binary, without infringing Intel patents? (yes, surely) Is the translator now infringing Intel patents because it has to understand their ISA? (no, surely).
Now, can I incorporate that translator into my OS such that it can now execute i386 binaries by translating them to my new instruction set which I can execute either directly or by emulation? If so then I am now not infringing. Or did infringement suddenly manifest because I combined two non-infringing things (translator + emulator for my own translated ISA)?
Okay, got it. I'll make sure to account for that in my next CPU/device purchase.
It's quite possible I'm missing something vital here, of course.
And unless Qualcomm and Microsoft are working on hardware-assisted x86 emulation, this warning shot may be directed at somebody else.
My guess: Apple.
AMD licenses x86 patents to Qualcomm/MS to make the x86 emulator more patent-troll-proof. In return, Qualcomm and AMD team up on better ARM-based server processors. MS can sell more Windows/Windows Server (sad).
I would love to see Dell, Lenovo, and HP switch exclusively to Ryzen processors,
and switch to the new Naples CPU in all their server/storage systems.
That's because it doesn't factorize the input into separate meaningful parts. The next step in LSTMs will be to operate over relational graphs so they only have to learn function and not structure at the same time. That way they will be able to generalize more between different situations and be much more useful.
Graphs can be represented as adjacency matrices and data as vectors. By multiplying vector with matrix, you can do graph computation. Recurring graph computations are a lot like LSTMs. That's why I think LSTMs are going to become more invariant to permutation and object composition in the future, by using graph data representation instead of flat euclidean vectors, and typed data instead of untyped data. So they are going to become strongly typed, graph RNNs. With such toys we can do visual and text based reasoning, and physical simulation.
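The adjacency-matrix claim is easy to demo in a few lines (a toy sketch in plain Python, no ML library assumed): multiplying node features by the adjacency matrix performs one step of neighbor aggregation, which is the basic operation these graph-structured networks build on.

```python
# Toy directed graph: an edge i -> j is recorded as A[i][j] == 1.
A = [[0, 1, 1],   # node 0 points at nodes 1 and 2
     [0, 0, 1],   # node 1 points at node 2
     [0, 0, 0]]   # node 2 points at nothing

x = [1.0, 2.0, 3.0]  # one feature value per node

def matvec(m, v):
    """Matrix-vector product: one step of aggregating neighbor features."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

y = matvec(A, x)
print(y)  # [5.0, 3.0, 0.0] -- each node sums its out-neighbors' features
```

Stacking such steps (with learned weights and nonlinearities in between) is what lets a recurrent model operate over graph structure rather than a flat vector.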
Instead of handwaving about "forgetting", it is IMO better to understand the problem of vanishing gradients and how can forget gates actually help with them.
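A minimal numeric sketch of that point (the scale factors here are made up; no actual network is involved): backprop through T timesteps multiplies T per-step factors together. A plain RNN's factor is typically well below 1, so the product vanishes, while the LSTM cell-state path is scaled by the forget gate, which the network can learn to keep near 1.

```python
T = 100  # timesteps to backpropagate through

# Plain RNN: suppose each step scales the gradient by 0.9.
grad_rnn = 1.0
for _ in range(T):
    grad_rnn *= 0.9   # after 100 steps: ~2.7e-5, effectively vanished

# LSTM cell state: the gradient flows through the forget gate,
# which the network can keep close to 1.0 (here, 0.999).
grad_lstm = 1.0
for _ in range(T):
    grad_lstm *= 0.999  # after 100 steps: ~0.90, signal survives

print(grad_rnn, grad_lstm)
```

That near-multiplicative-identity path through the cell state is the whole trick: the forget gate lets the model choose when to shrink the factor (forget) and when to keep it near 1 (remember).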
And Jürgen Schmidhuber, the inventor of LSTM, is a co-author of the RHN paper.
It's well understood that CFGs cannot be induced from examples, which accounts for the fact that LSTMs cannot learn "counting" in this manner -- nor indeed can any other method that learns from examples.
 "Strings generated from"
The same goes for any formal grammars other than finite ones, i.e. those simpler than regular.
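The "counting" involved can be made concrete (a quick sketch, nothing specific to any paper): recognizing a^n b^n requires an unbounded counter, which a fixed-size finite-state machine by definition lacks -- so no regular-language recognizer handles all n, while one counter suffices.

```python
def is_anbn(s):
    """Recognize the context-free language a^n b^n using a single counter."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b' is out of order
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0           # the counts must balance exactly

print(is_anbn("aaabbb"), is_anbn("aaabb"), is_anbn("abab"))
# True False False
```

An LSTM that "learns to count" is effectively approximating that counter in its cell state, which works on string lengths near the training data and degrades beyond them.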
Is anyone working with LSTMs in a production setting? Any tips on what are the biggest challenges?
Jeremy Howard said in the fast.ai course that in applied settings, simpler GRUs work much better and have replaced LSTMs. Comments about this?
You would think an article like this would define LSTM somewhere.
Completely with you on the 'sucking up and sending a kind helpful response'. Snark does not pay. It makes no sense to snap at a user.
Regarding the other point about first-time users. I have a slightly related theory.
When you design something, design it for someone who has the attention span of a two-year-old. Not because your app is going to be used by a two-year-old. But because that is how much mental bandwidth a user is going to give you. Your user is probably busy or just likes to multi-task.
Working that much harder on the UI pays off, or at least prevents a disaster.
Taking a literal step away tends to help. I've often realized new approaches or epiphanies when mulling a problem while walking or in the subway.
In this app's case, it's about re-imagining an existing function -- timetables. The designer knows that user experience is everything, and because of this he's willing to scrap everything if need be. And even when this happens, it isn't exactly wasted effort, as you come to understand the problem more deeply and arrive at an even better design.
Sure you can argue that an MVP can bring about those design iterations. Keeps your focus on the users too. But arguably the market for this type of product is very active - though not necessarily competitive. So rather than get buried with the hundred others, it needs to shine right from the beginning.
This is generally true, but it seems a bit like applying an Enterprise view of sales to a market of minnow sized budgets. It reinforces app consumers' view that apps should only charge for marginal value, not core value or the biggest value. This sort of "freemium" model leads to basically a market of pure crap with extremely rare gems.
Edit: I'm not dumping on the author, here. Were I to "do mobile" I'd probably take a similar approach because it clearly works.
In reality most of us don't really have more than a few shots, except in those rare cases of the most trivial apps.
The really scary part was after dialing the number and encountering the operator, we were unable to hang up (any time we hung up and picked back up, the operator was still there, even after waiting about two minutes). Fortunately this was (a) at MIT which still had a central electromechanical telephone switch for student phone lines in the '80s and (b) I had keys to the switch as a student phone repair tech.
I still remember grabbing my keys, running over to the switch, and physically pulling the relay contacts to release the call and prevent a trace to our location in case that was the motivation for holding the line (nowadays traces are digital and instantaneous, but when looking at old-school electromechanical switches you really did need time to trace the call physically through the relays).
Yes, we were aware the operator was probably just messing with us by showing he could hold our line against our will to discourage us from calling again, but it still scared the crap out of us just in case.
There is also another service called WPS for cell phones where you get priority just by prefixing your number with *272, the only catch there is your specific phone needs to be enrolled.
But yeah - it's all the luck of the draw. Some phone people have had varying levels of luck with other things involving that area code as well: http://www.binrev.com/forums/index.php?/topic/48478-weird-71...
For example, this PDF explains a lot more than anything present on HN or Wikipedia: http://chicagofirstdocs.org/resources/060912-GETS.pdf
Here's a doc that covers all US Federal emergency communications: https://www.dhs.gov/sites/default/files/publications/nifog-v...
I originally discovered this guy from HN and the audio recordings on that site are mesmerizing to me.
If a number was not an active customer it was put in an outbound call list to solicit long distance.
The best story I remember was when the Navy wanted to know why we called one of their nuclear submarines. This implied that the right 10 random digits had contacted a sub.
I suspect it got killed off because so many businesses were switching to cheapo, poorly-made, Winmodem-based PBXes that didn't recognize the area code.
808-248-0002 - "Your GETS call is being processed. Please hold."
I feel bad for that operator
Sounds like a major security problem, and during a crisis is especially when I would not like to have a buffer overrun.
I'm in my 40s. Incredibly old by HN standards. And yet, I feel no nostalgia for the "good ol' times." I mean, don't get me wrong -- I'm sure there's a lot of things that set me apart from newer generations -- I don't get Snapchat at all ;) -- but I don't see me being happier by being put in a house set up to look and feel like the 90s/80s.
Is it maybe because we as programmers tend to be less prone to being stuck in the past? Just wondering.
How we feel and what we think of ourselves affects our levels of Testosterone, Cortisol, Serotonin, etc. Even a 5 minute conversation can give you a T boost of 30%+ ... or believing that you're perceived as high status alters your Serotonin. Those hormones in turn make you more vital.
So who knows what was the reason... maybe more social interaction with strangers? Or simply putting their mind into a different, better place?
There were lots of things I could do in my 20s (e.g. refuse to use gasoline-powered city transportation, refuse to patronize places that used disposable cutlery, refuse to use non-free software, etc.) that I can't do when I'm in my 30s because people around me would think I'm a stubborn idiot, jeopardizing my career at a point where I have not yet established myself. It's very easy to tell a colleague, advisor, anyone at school that you're going to bike to the destination or take electric-powered transit [because you don't believe in a fossil fuel future]. It's very difficult to say the same thing to an investor, co-founder, employee, customer, or whoever is offering you a ride in their car, without feeling like an ass. I'm basically forced to be "normal" during work times and fit into the mould of society. I can only be myself on evenings and weekends.
I can only imagine how much more "being normal" I need to do if I had kids, pets, tenants, or whatever. I don't have any of those at the moment. The other night I was pondering over potential improvements to our music and mathematical notation systems while staring at the Milky Way. (I didn't come to anything conclusive, but I love thinking outside the boxes that society defines for us.)
10 years ago, I could truly be myself 24 hours a day. I was basically learning all kinds of things about the world by doing that. Now, I only get about 5 hours a day to be myself. The rest of the time, I need to conform. The lack of "me" time itself may contribute to some degree of mental rot/aging, apart from the biological component.
Which is to say, I'm dubious as hell of this result: For something this click-baity, at this point in the history of psychology research, I'mma need some serious replication before I give it an ounce of belief.
I think the computing party is just getting started. Non-trivial domestic AI will be here within a couple of years, personal robotics 5-10 years after that.
The current ad mania sucks, but it's going to have to evolve or die.
I don't miss much of the past. Pocket phone computers, tablets, GPS, video calling, massive data storage, and the potential of renewables and distributed energy grids are all awesome. Like.
Even social has its moments.
The real problems are cultural and political. There's been some movement there, but not nearly enough. The system has nearly enough energy to go through a phase change soon, and that's when things will get really interesting.
Moi? The body and mind are both subject to: Use it or lose it. We also, as humans, tend to assimilate into the norm around us, be it smoking, obesity, and now I guess perhaps youth.
Finally, I have to wonder about the effects of essentially being on holiday. In addition, perhaps the group discussions energized them? That is instead of waiting to die, they had more reason to live? In any case, interesting.
But ultimately the end is the same. You can't reliably exercise your way to 90, even. The majority of people who are exceptionally fit die before reaching that milepost in the environment of the last 90 years of medical technology. The future of health and longevity in later life will be increasingly determined by medical technology, and nothing else. Aging is damage, and that damage can be repaired given suitable biotechnologies to do so.
DNA methylation patterns correlating strongly with age are a very promising tool when it comes to assessing treatments for the processes of aging. Companies offer various implementations now - see Osiris Green for a cheaper example, to pick one. In the SENS view of aging as accumulated molecular damage, epigenetic changes are a reaction to that damage; a secondary or later process in aging. We'll find out over the next few years how the rejuvenation therapy of senescent cell clearance does against this measure, now that things are moving along there.
But you shouldn't think it impossible to construct useful metrics of biological age more simply. There are a number of excellent papers from the past few years in which researchers assemble weighted algorithms using bloodwork, grip strength, and other simple tests as a basis into something that nears the level of discrimination of the epigenetic clock.
When it comes to a biomarker of aging, there are lots of promising candidates. Researchers will spend a lot of time arguing before they come to any sort of pseudo-standard for that task. Industry (today meaning the companies developing senolytic therapies for the clinic) will overtake them and, I'd wager, adopt one of the epigenetic clocks because it basically works well enough to get along with, and can be cheap in some forms.
'aging' is the word you are searching for
Note that this change depends on the shared PID namespace support, which is a larger, still-ongoing endeavour.
Coming into the k8s ecosystem with very little container experience has been a steep learning curve, and simple, concrete suggestions like this go a LONG way to leveling it out.
Feel free to take them for a spin; feedback is welcome and appreciated.
Right now, we have a bunch of microservices. Most of them talk to our shared infrastructure. We started with a single configuration file, which has grown to monstrous proportions, and is mounted on every pod as a config map.
What would be the correct approach? Multiple configmaps with redundant information are just as bad, if not worse.
Edit: oh, you kind of do. Well, it's not upcoming any more, it's in the latest Docker CE :)
If some of you are interested in Kubernetes GPU clusters for deep learning, this article might be good to read as well: https://news.ycombinator.com/item?id=14526807
The k8s blog has some as well: http://blog.kubernetes.io/2016/06/container-design-patterns....
i've seen this pattern before and it didn't make me feel very good. it reeks of unnecessary complexity.
Kind of... but you can set `restartPolicy: Always` and it will always restart in case of failure.
This was brought up in a thread last week about "As a female, how do I identify a good employer?"
The best answers basically said "work somewhere that has as boring a corporate culture as possible". Basically, work for a place where you are rated on your production and nothing else. Work elsewhere and things like "how late did you work?" -- a metric that is far easier for people without children to meet -- cease to matter.
Working late isn't the only thing (though it is a big one), but it tends to correlate with "immature HR practices" in general. Inclusivity is about recognizing that people have life configurations that differ from your own, and creating the space for those differences to exist.
I lost count of how many times something innocent, like not going to lunch with the team regularly (I'm a picky eater) or not participating in whatever game the team was nuts about (foosball, or various exotic board games), turned into personnel issues where all of a sudden I was "unavailable to the team", or "distant and aloof", etc., even though my professional contributions were just fine or even stellar.
You can imagine how stressful it is to show up to work everyday wondering what bullshit non-work related nonsense is going to come up that day and require another stupid chat with your manager. And in the midst of that you're expected to keep up a cheerful demeanor and work well with the same assholes that keep bringing up this irrelevant crap because the fault in these interactions couldn't possibly be with them.
The day it becomes about the work, and not personal discomfort with new and differing points of view about communication and interaction, diversity at tech companies will become an after thought ... in a good way.
Yes! Yes! Yes!
I don't drink, and it's kind of sad that I miss out sometimes because I don't go to the bar. I like to bike instead -- why should I feel pressure to go to the bar rather than do my own thing after work?
Handling work stuff at work, I feel, is the way to go.
Read this three times and I can't understand what it means. Can anyone "translate?"
On the overall topic, it seems really obvious in retrospect that removing formalities in the workplace turns the office into a social club, and those who don't want to socialize are excluded. It's certainly an unintended consequence though.
I can certainly see the benefits of formality in the office now that I'm older.
Staunchly meritocratic online interaction and collaboration, from software development to messageboards, allows people to cultivate identities largely defined by their contributions, which is often distinct, or even at odds, from the identity they wish to demonstrate in their real life. In online spaces where individual contributors aren't restricted from speaking out against the leadership, this disconnect will manifest instead of being suppressed.
While I don't disagree with the author's recommendations and rationale, it's unfortunate that the OP's argument essentially reduces down to the fact that the less casual interaction between people, the more inclusivity will result. It's also re-framing the implied problem: the equality vs. equity debate. In the OP's view, the solution is to cultivate a minimalist, work-focused culture that solves the inclusivity question by avoiding it entirely. This is very much at odds with the approach that receives a lot more press these days, which seeks to prescriptively address inclusivity within its own problem-space.
The difference between a startup culture and a corporate culture is the difference between a creative company and a disciplined company. "Discipline" is like a swiss knife, something that can work anywhere and everywhere. Creativity only works in some places, in places that are desperate, in places that are still making basic decisions, in places where the problems are high and the solutions are few.
A disciplined company has no problem being acquired by a creative company. But a creative company has many problems when they start masquerading in a disciplined company. (Read: Microsoft acquires Company X and writes it off 5 years later.)
Working in a disciplined company is easy for most people. No manual required. Working in a creative company is difficult for most people but easy for creative people. Most foreigners or people with diverse minority backgrounds have a difficult time adapting to very social environments. They would rather stay strictly professional and confined to their work.
But here is the problem: what is the point of having diversity if social interaction is nil? How messed up is your social world if it does not include unsocial minorities?
There is a balance that is needed. Google started as creative and became more corporate and also became more "boring". (Sergey Brin's word)
If you only work a minimum number of hours within your field, you are unlikely to emerge as one of the peak achievers or thought leaders in your field. That's just because you learn more from experience, and working more hours gives you more experience.
You can extrapolate from there what this means for companies and individuals.
I am not at all saying that companies should ask people to work long hours. (I run a software company, and we are super-lax about hours, people showing up at the office, etc). But I am saying that if an individual wants to be an expert in a particular field, that person should probably work a lot (and probably wants to work a lot anyway, due to interest in the subject). This doesn't necessarily have to be at the company; it could be at home, on personal projects, whatever. But the deeper and more challenging the project is, the better you learn, and it's easier to have one project that is deep and challenging than somehow to have two in parallel. And if only one is deep and challenging, then you are sort of idling with half your time. So there are basically two paths to this kind of deep work: work for a company, make sure you get a project that's really good, and then work hard on it; or go do your own thing, make sure you have enough money somehow, and work hard on what interests you.
This also means that "work-life balance" is not a thing for experts the way it is for normal people. But that's fine, because for these kinds of experts their work is a serious part of their life and the two things are inseparable.
Of course if you don't feel this way about what you're working on, that it is a serious part of your life, then this strategy doesn't make sense; and I would not encourage people who don't feel this way (who are the majority of the population) to work that hard. I am just pointing out that there are some of us for whom a different life strategy is best.
> But above all I didn't have the cultural and social capital to know how to dress casual in the right way. My casual dressing was made of nerdy, unfashionable and cheap clothes: you could immediately say that I haven't accomplished anything. And I didn't even know that there was a rich way to dress casual.
there's more art to looking sharp in casual attire than in a suit and tie!
> Tyler Cowen: Well, being a casual person myself, I'm very glad being casual is in vogue, and probably will stay in vogue. But what I find striking is societies with a lot of upward mobility often tend to have strict dress codes. So you see this today with Mormons, at Mormon businesses. You see it in Japan in its heyday years--you know, the businessman or journeyman suit, they more or less all looked the same. There's something about upward mobility where actually clothing is not that casual and one is being more formal in trying to impress; and that is a [?]. But the thing about being casual is it actually makes it harder for people to prove themselves. So, Bill Gates goes to a meeting and he may show up dressed very casually; but he's still Bill Gates--either everyone knows or if you really needed to, you could Google him. So there's a code of casual that's actually very difficult for, say, people from other cultures in America to master or demonstrate that's actually made signaling harder. Just that right way of looking casual is in a funny way more conformist than like the blue suit and tie, which you could do and then innovate around and try to climb to the top. So I find this disturbing, the more I think about it.
> But maybe that comes at a cost. If we set aside that desire and focus on what we're really trying to do here -- make good software -- then maybe we'll open up some different possibilities. By constraining the number of things we have to agree on, and the number of hours we have to spend agreeing on them, we naturally open ourselves to a diverse world of talented people.
Much as we might wish otherwise, I think this article is right that informality and diversity are in tension (though I think it's massively wrong to conflate informality with long hours; it's very much possible to have a culture where you drink alcohol, play Rock Band, play board games, but still go home after your 35 hours/week). But having to give up informality would be a very heavy price. For me a comfortable life is the end and making good software is the means. But even if your goal is good software, looking at the past couple of decades of big professional companies being displaced by scrappy startups, informal organizations seem a lot better at producing good software.
Sometimes, to evolve, adapt and gain the edge, you have to be loose and unprofessional.
Sometimes, to survive a famine or a drought, you have to ruthlessly cut what isn't absolutely necessary.
These are the other phases of the business cycle that the author neglects. Professionalism, openness, and work life balance belong to a certain phase of the cycle. That phase does not come from nothing and it does not last forever.
Yesterday, I had a wide-ranging Slack conversation with some very nice people who patiently allowed this privileged white male to repeatedly touch the third rail of diversity and inclusion. That conversation led me to the realizations in this post. I'll thank them by not naming them, and by promising never to bring this up in their Slack channel again.
In other words, people openly hated on him for wanting to discuss something with them and get informed -- a white male in a position of power that few women or poc occupy. And all they can do is make him scared to bring it up again and act like the abuse they heaped upon him is some sort of privilege he didn't deserve or something.
I am so sick of women and people of color being openly hateful to people who were born the "wrong" gender and color to be part of the unfortunate many. Hello? Whining about how "it isn't my job to explain this stuff to you!" instead of being all "OMG! An opportunity to have a useful conversation with a white male who is actually curious about how the so-called other half live!" is part of the problem, not part of the solution.
(Before you auto-downvote this on the assumption that I am some overprivileged asshole man, please note I am a woman.)
That some companies with great effort manage to compete over the few female developers on the market doesn't prove every company could hire lots of female developers if only they changed their culture.
To be honest, personally, even if there were those hundreds of thousands of female developers supposedly driven away by bro culture, I would still maintain that people should have a right to create companies they enjoy working in. If some people want to work in T-Shirts and get drunk every night, it is their right to do so (if they can earn the money to sustain it).
Luckily not all companies are the same, so that people can apply to companies that suit their tastes.
If it weren't so, there wouldn't even be a need for hiring or job seeking to begin with. People could just apply to the next best company and be hired; likewise, companies could hire the next best applicant -- because there would be no such issues as cultural fit or whatever. Not very realistic (source: I am not friends with everybody and not everybody is friends with me).
It is a problem that corporate America tries to optimize function by getting everyone closer and closer together with team building exercises and alignment of values.
Values are deeply personal and we should recognize that people are going to differ. Freedom of conscience is as basic as freedom of religion and important for the same reason.
If we keep work a professional space, we maximize diversity of thought and life experience, which is ostensibly what the large push toward ethnic and gender diversity is a proxy for.
The reason Fog Creek works well, is because it's very smart people, who care about what they're doing. They care because it's a product company - they get to make decisions that impact the product. They feel a sense of ownership.
Contrast that with a sweat, uh sorry, I mean dev shop. Contrast that with doing contract work for big companies where you come in, leave 6 months later. Contrast that with start-ups that only exist because someone got free money.
Contrast that with shit maintenance work at big corps.
Does that about cover 95%, if not more, software jobs out there?
There is no fixing shit workplaces because the foundation is rotten. When you have no say, when you don't care about the product, when you move around every few years - yeah, it's shit culture.
There is no fixing that - most people long for a stable group of people they can make something happen with.
Most people are confused about how much work and dedication it takes to make something great. Most people's actions create what most people complain about and they don't even know it. There is no fixing it, there is only becoming good enough to either start your own Fog Creek, or be good enough to join one.
I don't see a problem with having companies with corporate culture and companies with startup culture side by side; just because I dislike the suit and tie culture doesn't mean I want it gone. However, from reading the article I get the impression that the author wants the more liberal companies gone just because he doesn't like them.
Hmmmm... I agree at least in principle that one shouldn't be required to always hang out late nights after work. However, I think it's occasionally useful to understand your co-workers' motivations, and spending a bit more time with co-workers sometimes is certainly reasonable and helps to build trust and respect.
Maybe people should be working only 30 hours a week and spending the other 10 hours just on team building.
I also think it's useful to understand the social aspect of things, because understanding motivations can help the team solve problems in a way that everyone will agree to.
I bet there's more than one out there.
Work/life balance doesn't "work" for a lot of people, a lot of types of work and a lot of lives. Astronauts, Presidents, Prophets . . . startup ceo's . . .
Like the overnight train that left me in an empty field some distance from the settlement, the process of economic development has for the most part bypassed the two hundred or so families that make up the village of Palanpur. They have remained poor, even by Indian standards: less than a third of the adults are literate, and most have endured the loss of a child to malnutrition or to illnesses that are long forgotten in other parts of the world. But for the occasional wristwatch, bicycle, or irrigation pump, Palanpur appears to be a timeless backwater, untouched by India's cutting edge software industry and booming agricultural regions. Seeking to understand why, I approached a sharecropper and his three daughters weeding a small plot. The conversation eventually turned to the fact that Palanpur farmers sow their winter crops several weeks after the date at which yields would be maximized. The farmers do not doubt that earlier planting would give them larger harvests, but no one, the farmer explained, is willing to be the first to plant, as the seeds on any lone plot would be quickly eaten by birds. I asked if a large group of farmers, perhaps relatives, had ever agreed to sow earlier, all planting on the same day to minimize losses. "If we knew how to do that," he said, looking up from his hoe at me, "we would not be poor."
1. how they didn't have pest problems if they planted in fractal patterns
2. but they did have pest problems if they didn't plant at the same time
Could someone kindly explain that in a little more depth?
This is a very important point to remember about subterranean tunnel systems. It is exactly what came to my mind when I watched the Boring Company video about a huge network of 3D tunnels. The tech press, which had probably never even covered a construction project, let alone tunnels, was basically like "what about earthquakes"? But tunnel collapse is not the primary safety issue.
It's fires. Smoke and toxic gases from fires spread very quickly through tunnels.
I am a huge fan of the concept, by the way, but I want to emphasize that most fatalities from traffic tunnels have been from fires (apart from ordinary traffic accidents). And Elon Musk has stated that what makes this vision feasible from a cost perspective is smaller tunnel diameters. Which makes air "communication" all the more accelerated and safety critical. Thus any vision of tunneling without detailing fire safety, evacuation systems and firefighter access is significantly incomplete, as these can add significant cost and fundamentally constrain designs.
Michael Punke, the author of The Revenant, wrote an excellent non-fiction book about it called "Fire and Brimstone: The North Butte Mining Disaster of 1917".
If you search for "stench gas" you will find some... interesting photos of the control panel for these systems:
(Something about buttons marked "release stench gas" and "release anti-stench gas" seems rather comically amusing.)
Unlike carbon monoxide this compound had a very "this will kill you" smell to it.
Right now the smells are simple signals; I'm curious if a scent could be engineered to contain a language. Like paper and writing.
I'm asking for my story, in which sapient rats struggle against two-legged monsters with opposable thumbs.
Had it go off on a couple sites I've been on, and it's remarkable how it reaches every corner of the mine.
And if you want to buy some: http://www.zacon.ca/stench-gas.asp
They use wasabi.
Found it... https://youtu.be/kH5JhYsfNMA?t=1m10s
Nowadays my sister suffers claustrophobia & I feel most comfortable shoved into small nooks. But I think I was mostly offset by anxiety related to ventilation.
Visited Timmins this May, another northern Ontario city. It was snowing.
These big companies turn regular people into corporate livestock to serve the wealthy.
If you were to analyze Facebook as if it were a country, the wealth gap among employees would be atrocious - The top 1% would own maybe 99% of the wealth of the country and everyone else would earn a minuscule fraction of the total value that they produced.
If we let monopolies take over, then the economy of the world will start to mirror the economies within these large corporations.
What's worse is that the social aspects will also be mirrored. We will gradually lose freedom of speech, in the same way that employees of large corporations don't have the freedom to say what they really think to their bosses.
Many who have worked for a big corporation will know how oppressive the environment can be. I'm really glad that I live in a time when there are still alternatives.
I'm not a fan of big government, but at some point depending solely on their goodwill seems dangerous.
Of the tech giants, Apple, Amazon, and Microsoft seem to be in the best shape at the moment. They make their money from selling real products and services to real end users.
Amazon is strong but not unbeatable. Walmart in particular, as well as a hypothetical alliance/merger of supermarket chains, are well positioned to break Amazon's dominance of e-commerce. But they will need the ambition and ruthlessness that has served Bezos so well. Few large American corporations still have the vigor and virility of Bezos's Amazon.
Microsoft (full disclosure: my former employer) too is strong but not unbeatable. All of their products are facing tough competition from Apple (in OS and hardware sales), Google (in online services), and multiple others (in business software). Microsoft's wins are hard fought and fair, and the competition never lags too much.
Re Apple: Technology comes and goes, but the iPhone was a good one. The troubling thing for me is the amount of cash they are hoarding. If this is intended to underwrite Apple Pay as a new bank, then they must first allow all technology platforms open access to Apple Pay. You can't have a new dollar that works better at Walmart than Tesco.
Re Google and Facebook: These companies are advertisers. I have no problem with their size or structure at present. I do have a problem with using their easy cross border presence to avoid taxation. You cannot have the biggest revenue generators for advertising paying no tax, when everyone else has to. In the UK Google is headquartered in Ireland so the UK receives no tax from the billions of revenue. This is unsustainable for the country.
My real problem with Google and Facebook is that they regard everything about us as theirs to do what they want with.
Monopolies are very easy to form on the internet, and in the interest of improving everyone's use, we need to try to avoid them. Walled gardens currently trap people into one service and limit the ability to swap between them, similar to "forcing" you to use just one company for your construction work, no matter the price.
I haven't heard a great answer to this problem yet, if one even exists. Is there a way to attempt to prevent these from forming which can be practically implemented?
I'm not sure what the workaround is, but for sure we (Americans) do not apply anti-trust / competition law as aggressively as we should. It just seems wrong to me to allow these big companies to buy up smaller companies and then obliterate them into nothing: not using their technologies, just destroying them by putting the patents into a vault and suing anyone who infringes, while not letting anyone benefit either. They all do this to varying degrees.
It's like at a certain market value as a percentage of either global or maybe national wealth, companies should be disallowed from mergers of any kind. And at another level of size, they are required to break themselves up into pieces.
Edit: Maybe disallow hoarding patents. If after X years you're not using a patent, it either auto-expires and is relegated to public domain, or it's compulsory to sell it. Use it or lose it!
On the other hand, you're locked in with your broadband ISP. If I'm going to decry some internet business, I will first point the finger at Comcast and their ilk.
Perhaps they are "too big", but only because there isn't a revolving door between tech companies and the corridors of power (yet?)
And yet it's allowed that like 80% smartphones sold in Europe come with Android versions that basically lock the user into the Google ecosystem, else you can't use the default store.
Is there an obvious flaw to this approach? Why don't I see anyone ever suggesting it?
Proprietary technology has a monopolizing effect in capitalism.
Since technology by definition has an exponential growth rate of efficiency, the monopolizing effect grows with it.
There are of course real life examples from the past...
On the other hand, being so reliant on one company for so much is bound to cause problems at some point.
Regulations are a double edged sword, and they're often used to stifle new business and crush possible competition.
It argues that every information network in history -- telegraph, telephone, radio, cable -- has followed the pattern of consolidation and disintegration. The new inventions always had the chance to disrupt the old industry, but our modern network -- the Internet -- might be an exception, because the Internet is the master switch of all things digitized.
What is the point of discussion if nothing concrete, other than rhetoric, can be done?
I don't understand why Microsoft gets fined, but Google gets a pass when they used their other products to push Chrome to a dominant position.
The rear peep sight on rifles takes advantage of actual "optical effects" without any glass -- much like a pinhole camera can actually magnify images without any lenses or mirrors at all.
By simply providing an arbitrarily small "aperture" to look through in the rear, the front-rear sight alignment problem is not only capped at an upper bound of error (defined by the peephole size and sight radius), but the actual error from front-rear sight misalignment is visually magnified and centered through a fixed viewing point, making it vastly easier to keep the actual error near zero.
So generally, to achieve precision within the (small) upper bound of error with a peephole sight, all you need to do is place the front sight post on the target when looking through the rear peep sight. Even better precision is made much easier via a sort of "peephole camera" effect through the aperture of the rear sight.
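The upper bound described above is easy to sketch as a back-of-the-envelope calculation. The numbers here are my own illustrative assumptions, not figures from the article:

```python
import math

def max_angular_error(aperture_mm, sight_radius_mm):
    """Worst-case angular misalignment (radians) with a peep sight:
    the line of sight can wander at most half the aperture diameter
    off-axis over the length of the sight radius."""
    return math.atan((aperture_mm / 2) / sight_radius_mm)

def error_at_range(aperture_mm, sight_radius_mm, range_m):
    """Worst-case point-of-impact shift (metres) at a given range."""
    return math.tan(max_angular_error(aperture_mm, sight_radius_mm)) * range_m

# Assumed example: a 1 mm peep, 600 mm sight radius, target at 100 m.
shift = error_at_range(1.0, 600.0, 100.0)  # roughly 8 cm worst case
```

Shrinking the aperture or lengthening the sight radius tightens the bound, which is why target rifles tend toward tiny peeps and long sight radii.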
I work from home and I live in the burbs so pistol or rifle shooting is not possible. However, I've gotten really hooked on shooting (of all things) my Red Ryder BB gun. It doesn't make a loud noise, it costs almost nothing to shoot, and it's surprisingly accurate for how inexpensive it is. These little BB guns have iron sights like the article discusses.
My favorite thing to shoot is little plastic bottles--particularly the ones that over-the-counter medication comes in. They're durable and make a nice popping noise when you hit them. I put them on little stakes in the back yard at about 10-15 yards and shoot at them from my deck. As I got better, I made up little games, like shooting them in a sequence and trying to get 100% accuracy. I find it easy to get back to writing code after doing this for five or ten minutes.
Competitive pistol shooters actually use several different sight picture styles.
In the speed styles of competitive shooting, the goal is to hit targets as fast as possible, so you want to make each shot in the "worst" way that will give you about a 95% chance of a hit. So for a close, low risk target, a shooter may look only at the target and ignore the sights, for a tiniest fraction more speed.
For most targets, looking at the front sight is correct. Shooters tend to lock their upper body into one shape, then pivot it from target to target while shooting a string. This locks the rear sight in just the right place behind the front one. When the front sight is put on target, the rear sight is automatically in the right place. It's true that the target does become a little blurred when you do this.
Then for really far targets, you do have to bring your focus back a little farther, and see and care about both sights.
The sight picture is not the only thing that changes from target to target. You usually budget the amount of time spent for each shot.
Surprisingly, many pros know where their round will hit before it reaches the target. The time penalty for missing a shot is so high that it's almost always better to take a second shot in case of a miss. However, it takes a while for a pistol shot to reach the target, and for your eyes to see where it landed (plus you'd have to change your focus to look for it, then back again to your sights). To get around that, with practice, you can know in the moment you pull the trigger where the round went, and follow it up in about a twentieth of a second with another round.
In most competitive pistol matches, the sequence of targets to be shot on a given stage is not rigidly defined. There are often plenty of constraints (this group must be shot before these) or timing related constraints in some sports (shooting this target will cause a pair of targets to pop up in 1.2 seconds). Given this, there's a surprising amount of planning that goes into discovering the optimum run. The details of each shot are then worked out and mentally rehearsed.
For the use cases that really matter, you won't be taking well-aimed shots, you'll be trying to get rounds out of the weapon in the general direction of the threat as quickly as possible, in order to buy yourself some time and/or space.
The front sight rule is not just the best aiming mechanism for the reasons of geometry described in the article, it's also the quickest way to acquire a basic sight picture under stressful conditions.
This is still true, but pistol red dot sights are becoming more prevalent.
Unlike camera lenses, our eyes can't easily focus on an arbitrary distance without an object being present there. Perhaps the front sight is working as an approximation of the hyperfocal distance.
is the "two sights" here the rear sight which has two posts, or the two sites as in front sight + rear sight?
several pages of reading and then .. an ambiguously worded conclusion.
Thank god for the HIDE option, you second amendment freaks are everywhere. Back under the bridge you go, losers.
I literally only made an account to post about how absurd and out of place this article is. If I wanted some second amendment lovers blog (and I don't), I'd simply find one.
Strike one, "hacker news". Strike one.
There were 250+ dedicated servers and 2-3 weeks of restoring week-old backups (thankfully they had these weekly intervals kept offline). Mass exodus of clients.
"Ex-employee" used root keys and a boot zerofill drop and rebooted every server resulting in severe data loss. Their online backup systems were also using these keys and we're not spared.
They said they would have to shut down the company as a result, but ended up securing capital and eventually launching what would become digitalocean.
They said it was highly probable that it was an ex-employee and that the FBI was investigating, but nothing was released about it.
Good cautionary tale for segregation of credentials and proper user key management.
Mario Savio was a free speech activist who organized a protest to protect freedom of speech at Berkeley in the 60s. In his speech to protestors, he says "there's a time when the operation of the machine becomes so odious... that you can't take part... and you've got to indicate to the people in charge that unless you're free, the machine will be prevented from running at all!" Applied to free speech, this notion of disrupting the functioning of an organization was lauded, because freedom of speech is just that important.
But let's shift to employment. Without employment, it's very hard to survive. And here's a situation where the people in charge have the upper hand in every arena -- hiring, pay, workplace behavior, etc. How do we know that the ex-admin wasn't blackmailed by the CEO to come back to work for free to fix something, or future references would be negative? Why are we so quick to side with the employer in this matter when we know nothing of the situation at all? Why do we start calling the employee a felon? He hasn't even been charged yet.
My point is, context is important. Fine, corporations have the power to ruin your life as a deterrent to keep you from acting against their interests, and that's just the way society is. And fine, we're not all rational at every instance of life. The calculus of establishing a status quo equilibrium under those two conditions/constraints is hard, but without context to the situation, who are we to decide who's right or wrong? Would you label Mario Savio wrong for protesting and urging protestors to prevent the college from functioning, in the name of preserving free speech? No, because you've learned the context.
BTW, we had a netadmin interview a few months ago. Guy was really smart, aced the technical and group interview. We were really looking forward to hiring him, and only needed to pass a background and reference check. HR told us in no uncertain terms to run the other way. They didn't share what was in his check but it wasn't good.
And having switched jobs quite a few times, the next one is always better for you, regardless.
As my own company is growing, we fully trust all employees (limiting only what is essential), but a devops guy, if he was so inclined, could technically do something like this... It always scares me.
(Btw, IMO there is no excuse or justification for any admin or ex-admin to ever do this. Among many other issues is the fact that he deleted the data/work of individuals who had nothing to do with whatever "problem" he has with Verelox.)
Nothing is foolproof, but anytime you've got constant network access to every last copy of your data, you're begging to lose it. It's the reason why people who think one copy (redundantly dispersed or not) in AWS S3 is sufficient scares me to death. Is it unlikely Amazon would get hacked and have the entire thing blown up? Sure... but if we go to war with China I wouldn't want to bet my company on it.
Otherwise while the vast majority of your staff will be decent people and not cause problems like this, it just takes one angry ex staff member with a grudge to cause problems.
They also need to revise their backup system. There should rarely if ever be a risk that any data is 'unrecoverable', yet their update says some data will just be impossible to get back.
As for the employee involved... well I hope they like the inevitable lawsuit their selfish, stupid actions will bring them. I don't care what you think of a company you worked for, there's no excuse to destroy their business through actions like this. Also, good luck getting any jobs in the industry after too. Because with this on your track record, no one will touch you with a ten foot bargepole.
So yeah, what a disaster all round.
Some posts from Verelox staff are towards the bottom third of this forum page; search for the user name Verelox.
I'd like to know more, I think...
Looking at some comments that treat this as a comeuppance of a sort is disturbing. We really shouldn't use anthropomorphization of a company to justify inhumanity in us.
And I'm seeing those comments being flagged. Very good.
On a personal note, having worked for a period with Travis at RedSwoosh, I think he's getting the raw end of this deal. He's a really great entrepreneur and a nice enough guy. He pushes hard and plays to win; as the CEO, he's getting an inordinate amount of fallout from these issues. I would say he's not the root cause of the issues, but he is the root cause of Uber's success. Uber will not be the same without him.
Validating her word and her integrity would go a long way toward showing Uber is serious about reform. Otherwise, it seems like they are simply taking another very roundabout path to silencing a woman.
Due to TK and his buddies' supervoting shares, he may be able to withstand his ouster, but only while there's still money in the bank.
I can't see them being able to raise money on founder-friendly terms again after all this turmoil. Indeed, this may be the only way to loosen TK's grip on the company, or remove him altogether.
With a significant part of senior leadership replaced, what will happen to Uber's workplace culture? As in, can culture in such a large organisation be changed top down? Are there any examples where an organisation has changed on a short time scale?
I am thinking of the Ballmer/Nadella transition but in my (outsider) perception it took years for Microsoft to be viewed differently as a company. There is also an aspect of bringing in an outsider versus letting an insider take over.
So this will probably happen:
Clear the decks
Put someone boring in charge
Lie low for a while + submarine mode
Announce some new interesting thing
I say the taint is still there, drive the stake in and add holy water.
Where "is likely to take a leave of absence", in this context, is shorthand for "has grudgingly agreed to resign as soon as we can find a replacement -- which, believe us, we intend to do as soon as humanly possible. But we'll in the interim call it 'taking leave' to soften the overall business impact -- and of course to at least attempt to staunch the exodus of the best and brightest of our employees, no doubt already in progress."
How about any customers or contracted employees?
But -- Uber hired the advice, a point I need to remember.
If it can be done as easily as so many in this thread are claiming, then why aren't you a billionaire? (oh I know, you just don't want to be)
It's a repetition of the same things I read on HN when it became obvious that Facebook was going to be a social monopoly. Fantasy: it's easy to clone Facebook, it's really not even that complex, someone should build an open competitor that the public will reject in every way and never want to use. The same was frequently said about Twitter as well, countless clones were attempted, zero succeeded. And it's dramatically harder to successfully replicate & compete with Uber than Twitter.
There are in fact numerous switching costs and extreme barriers to entry, that prevent competitors from just rising up and taking Uber's position as king. Otherwise there would be a dozen Ubers in the US, all vying to be multi-billion dollar companies, making their founders billionaires, and yielding huge returns for VCs.
He either needs to legitimately fix the problems or stand aside and let a better leader do it.
Edit: For the record, I'd much prefer to see him do the former than the latter.
1. The Susan Fowler workplace sexual harassment claims were horrific and sounded true. So it does sound like there are some complete dicks working there. If the CEO was party to this stuff then I'm wrong, but I assume he wasn't.
2. The hatred for the CEO is surely exaggerated. The video of him in the cab with a customer was fine; it was a good robust conversation -- towards the end both parties became annoyed. Perfectly normal behavior. He sounds nice enough to me from the words of his that I've read. Obviously you don't take CEOs that seriously; their entire job is to place inane positive spin on their company every day that they wake up.
3. The Guardian and NYT so obviously have the knives out for the company that it's become laughable reading their coverage.
4. His mum just died in a tragic accident and people are not even mentioning that in stories about him taking time off.
5. It's become common for people to say stuff like "they're not contributing anything novel", and otherwise completely underestimate the wonderful transformation that they have effected in personal travel and efficient usage of cars.
6. Young American liberals have become so annoying about political correctness causes such as workplace sexual politics that I have really come to hate that aspect of working in America and I am inclined to support Uber just to annoy them (being honest here, I didn't claim my post would be appreciated).
7. The sight of Bernie Sanders-supporting liberals earnestly trying to improve the world by choosing one silicon valley start-up over another would be funny, if the failures of the left weren't so depressing at this time when we need a grown-up left more than ever.
Schema changes happen lazily, with old rows being rewritten on the next update, and a background job doing bulk rewriting.
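A minimal sketch of that lazy-migration pattern (my own illustration, not comdb2's actual implementation): each row carries a schema version, the write path upgrades any row it touches, and a background job bulk-rewrites the rest.

```python
SCHEMA_VERSION = 2

def upgrade(row):
    """Rewrite an old-format row to the current schema.
    (Hypothetical change: v2 adds an 'email' column defaulting to None.)"""
    if row["_v"] == 1:
        row = {**row, "email": None, "_v": 2}
    return row

class Table:
    """Lazy schema change: old rows are rewritten on their next update,
    while a background sweep bulk-rewrites the remainder."""
    def __init__(self, rows):
        self.rows = {r["id"]: r for r in rows}

    def update(self, row_id, **changes):
        row = upgrade(self.rows[row_id])        # upgrade on touch
        row.update(changes)
        self.rows[row_id] = row

    def background_sweep(self, batch=100):
        for row_id in list(self.rows)[:batch]:  # bulk rewriting job
            self.rows[row_id] = upgrade(self.rows[row_id])

t = Table([{"id": 1, "name": "a", "_v": 1}, {"id": 2, "name": "b", "_v": 1}])
t.update(1, name="a2")   # row 1 upgraded lazily on write
t.background_sweep()     # row 2 caught by the background job
```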
Scalability: "While reads scale in a nearly linear manner as cluster size increases, writes have a more conservative level of scaling with an absolute limit. The offloading of portions of work (essentially all the WHERE clause predicate evaluations) across the cluster does help scale writes beyond what a single machine can do. Ultimately, this architecture does saturate on a single machine's ability to process the low level bplogs."
This doesn't provide the horizontal scaling that Spanner does, CockroachDB aims at, or FoundationDB presumably has.
* optimistic concurrency control (sometimes you need to retry, but often the optimism pays off)
* serializable transaction isolation (something like but not exactly like ssi, rather than 2pl)
* ieee 754-2008 decimal floats
* undo-based mvcc (writers don't block readers)
* group (network) sync replication
* paxos based failover
* lua stored procs
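For anyone unfamiliar with the first bullet, optimistic concurrency control boils down to a read-compute-commit loop that retries on version conflicts. A generic sketch of the pattern (my own illustration, not this database's API):

```python
# Minimal optimistic-concurrency retry loop over a versioned value.
# Commit succeeds only if nobody else committed since our read.

class Conflict(Exception):
    pass

class Store:
    def __init__(self):
        self.value, self.version = 0, 0

    def read(self):
        return self.value, self.version

    def commit(self, new_value, expected_version):
        if self.version != expected_version:
            raise Conflict  # someone else won the race
        self.value, self.version = new_value, self.version + 1

def transfer(store, delta, max_retries=10):
    for _ in range(max_retries):
        value, version = store.read()
        try:
            store.commit(value + delta, version)
            return
        except Conflict:
            continue  # re-read and retry with fresh state
    raise RuntimeError("too many conflicts")
```

Under low contention the commit almost always succeeds on the first try, which is exactly the "often the optimism pays off" trade-off the bullet alludes to.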
Interesting technology and I'm very happy to see it open-sourced. Kudos to the team. (I used this when I worked there. Few firms can pull off something like this in-house; they could. You wouldn't believe how much data they store in this thing.)
I don't mean that in a derogatory way, I'm just curious what motivated making this.
As you'll notice from the article, Amazon seems to try to enforce the non-compete every few years, I'm guessing mostly as a message to existing Amazon employees. All I've ever seen it do is piss off their employees.
That said, it's totally possible to leave Amazon and move on to an actual competitor. You just have to get lawyers involved. Oracle's Bare Metal Cloud org, which I work for, is made up of roughly 75% ex-AWS staff (Amazon has been hemorrhaging staff to Oracle because of better pay and way better working conditions). Lawyers on both sides end up negotiating back and forth, and you just end up not working on anything related to what you were working on for AWS.
This is a perfect encapsulation of political reality in 21st century USA.
Corporations run every aspect of our government, no one else has a meaningful voice.
I would highly recommend the same to anyone else. Absolutely worth the time and money to be safe and covered if you still live / work somewhere they are enforceable.
"When we looked at their offerings and what Smartsheet does, it is two totally different worlds," said Mader. Using Amazon's logic, Mader said that Smartsheet would be viewed as a competitor to Amazon Prime because both services make people more productive.
In my opinion, a non-compete is something that needs to be separately negotiated and compensated, rather than lumping it into "employment." If you agree to a non-compete, you are paid $X in exchange. If you violate the non-compete, you must pay $X back (and there could be a negotiated multiplier, e.g. $3X). In the absence of agreement, the legal default should be 1:1. If you are paid nothing for a non-compete, it is unenforceable. If you are paid $1, you must pay $1, and so on.
This gives each side an opportunity to value and agree upon the non-compete apart from the job itself. Eventually, most industries would settle on standards.
Non competes treat employees like they have no value and are mere slaves or sharecroppers.
I especially like the ones where you have a contract that is for 3-6 months and they want a non-compete for multiple years.
How about this: if a company really wants a non-compete, then make sure it's paid for fully, above salary; otherwise this is just ownership of skilled labor.
This seems like the sort of thing that should give someone pause when interviewing with Amazon for a job.
The irony is that his boss is forcing this on them because she quit her job and brought a bunch of her former coworkers with her in order to start her business, and she doesn't want anyone to do the same to her.
What it means is my friend essentially can't find another job without either:
A. Moving (which is not an easy choice if you have a family) or
B. Subjecting himself to a significantly increased commute
This means his current employer can essentially take advantage of him, as she knows he won't be going anywhere unless the situation becomes truly unbearable.
I'm often hit up by Amazon recruiters, and I've talked to a few before and got the sense (just by their tone) that it's not some place I want to be working. They were also deceptive in their recruiting tactics.
But let's talk this through. Assuming we all agree we don't like them, what's the alternative? There seem to exist obvious negative consequences of them not existing in any form.
Wouldn't large companies such as Google, Amazon etc be able to poach any employee of let's say a startup competitor, by simply paying way more, therefore being able to steal ideas, technology, etc?
Or even amongst (well-capitalized) companies of any size: a free-for-all for employees within industries, good or bad? Maybe not so bad, actually... what say you?
Seriously asking because I'm trying to envision the positives and negatives of them being outlawed..
I don't like non-compete agreements. They are now far away from their intended purpose. It's wrong to point fingers at a single company, though - the entire sector suffers from this.
I mean, wouldn't both conservatives and liberals agree?
Ouch. Going right for the Amazon jugular right there.
Of course the good part about a crowd is that all views come out, and so the closed-source criticism has its place, but we should at least give them their due and some kudos. We know people will try to evaluate the implementation and see what happens. In this case it's just a PR article. Let's wait for them to release details and see if it stands up. Maybe the protocol is enough to give us confidence that their claim is true. We don't know yet.
TL;DR: Keychain recovery relies on a cluster of hardware security modules to enforce the recovery policy. After 10 tries to guess your PIN, the HSM will destroy the keys. Apple support gates all but the first few of these tries. The paper also implies that you can use a high entropy recovery secret as an alternative, though I can't figure out how you would enable that.
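The guess-limit policy in that TL;DR can be modeled roughly like this (my reconstruction of the idea only, not Apple's code -- the class and parameters are invented): the HSM holds an escrow record that verifies a PIN via a slow KDF and destroys the escrowed secret after 10 failures.

```python
import hashlib
import hmac
import os

class EscrowRecord:
    """Toy model of an HSM-enforced recovery policy (illustrative only)."""
    MAX_TRIES = 10

    def __init__(self, pin, secret):
        self.salt = os.urandom(16)
        # Store only a slow hash of the PIN, never the PIN itself.
        self.tag = hashlib.pbkdf2_hmac("sha256", pin.encode(), self.salt, 100_000)
        self.secret = secret
        self.tries = 0

    def recover(self, pin):
        if self.secret is None:
            raise RuntimeError("record destroyed")
        candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), self.salt, 100_000)
        if hmac.compare_digest(candidate, self.tag):
            self.tries = 0
            return self.secret
        self.tries += 1
        if self.tries >= self.MAX_TRIES:
            self.secret = None  # the HSM destroys the escrowed keys
        raise ValueError("wrong PIN")
```

The "Apple support gates the later tries" part would sit outside this object, as rate-limiting in front of `recover`.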
This seems like a pretty reasonable point in design space to me. Of course, you are relying on Apple's trustworthiness and competence to implement this design. But that is true without recovery, since the client software is also implemented by Apple.
For secure, private messages, your sane current options are Signal, WhatsApp, and Wire. Signal is the best option, but you're going to make some UX sacrifices for security. WhatsApp and Wire are extremely comparable. If you worry about implementation or operational security flaws, WhatsApp has the Facebook security team behind it, and a long-term relationship with OWS; no cryptographically secure messenger is better staffed. If you're worried about Facebook seeing your metadata, which is a sane worry, Wire is approximately as slick and usable as WhatsApp with mostly the same underpinnings.
Regardless of the underlying cryptography, in the absence of a well-reviewed published crypto messaging protocol, iMessage is basically just an optimization over SMS/MMS. It's great for that, but it shouldn't be anyone's primary messenger.
If Apple has changed backups to function in a more private manner, they would announce that as such, not as something exclusive to iMessage.
More detail: iMessage syncing has always been maximally private from day one. However a drawback to the current implementation is that new devices cannot sync message history. The reason is that each message is encrypted separately by senders for each currently registered device for the receiver. And yes that means if you have 3 devices on your iCloud account, whenever someone sends you an iMessage, 3 separately encrypted copies get sent. Apple has gone to great lengths to ensure that private keys are never shared by devices.
So what's new is apparently Apple's figured out a way to sync history via iCloud. I'm interested to hear the implementation details, but there can be no doubt that it still respects the design goal of never sharing private keys.
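The per-device fan-out described above is easy to model. In this toy sketch (mine, not Apple's -- the XOR keystream stands in for real per-device public-key encryption), a recipient with three registered devices gets three independently encrypted copies of each message:

```python
import hashlib

def keystream(key, n):
    """Toy SHA-256 counter-mode keystream; NOT real cryptography."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher is symmetric

def send_imessage(plaintext, device_keys):
    # One independently encrypted copy per registered device, so a
    # three-device account receives three ciphertexts per message.
    return {device: encrypt(key, plaintext) for device, key in device_keys.items()}
```

This also shows why history sync is awkward: a brand-new device has no key that any past ciphertext was encrypted to.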
Now, the privacy goals for backups are different. You obviously want them to be as private as possible, but most people generally want to be able to recover their life in the event of a simple forgotten password. There are certainly scenarios where you want to encrypt your backups, but it should always be an informed, opt-in choice. You should clearly be aware that if you forget your password, you lose your backups. So generally it's desirable to default to having a fallback recovery method.
Like I said earlier, if Apple has figured out a fallback recovery method that somehow does not involve storing your data in a manner they can decrypt, that would be something they announce as part of iCloud Backup... not just for iMessage. But it seems almost a fundamental design constraint. You can either have something impossible for anyone else to decrypt or conveniently recoverable backups, not both.
If they don't, I don't think it is that hard for Apple to extend their current security model to iCloud. They currently rely on senders encrypting messages with each destination device's public key, so they can store the individually encrypted messages separately in iCloud.
When a new device arrives, they could have an existing device perform re-encryption of the messages for it (after the user authorizes that the device should be added).
Even without the new iCloud functionality, Apple has always been in control over the key exchange, which would allow a malicious employee / government to write code that could add a new authorized device/key silently and thus allow Apple to eavesdrop from that point on in future conversations.
Good or bad idea?
color me skeptical.
> "Our security and encryption team has been doing work over a number of years now to be able to synchronize information across your, what we call your circle of devices -- all those devices that are associated with the common account -- in a way that they each generate and share keys with each other that Apple does not have."
> It's unclear exactly how Apple is able to pull this off, as there's no explanation of how this works other than from those words by Federighi. The company didn't respond to a request for comment asking for clarifications. It's possible that we won't know the exact technical details until iOS 11 officially comes out later this year.
> Meanwhile, cryptographers are already scratching their heads and holding their breath.
This might be uncharitable, but in my mind I think this writing and presentation of facts (probably unintentionally) implies that this capability is novel, when it's not. Sharing keys between multiple devices is a straightforward issue if you're willing to make user experience trade offs. Cryptographers are not scratching their heads wondering how Apple could achieve E2EE with a network of devices, they're wondering how they did it without sacrificing account recovery. It's not clear to me that readers would automatically understand this, because the real head scratcher isn't addressed until near the end of the article, which brings me to my next point:
> "The $6 million question: how do users recover from a forgotten iCloud password? If the answer is they can't, that's a major [user experience] tradeoff for security. If you can, maybe via email, then it's [end-to-end] with Apple managed (derived) keys," Kenn White, a security and cryptography researcher, told Motherboard in an online chat. "If recovery from a forgotten iCloud password is possible without access to keys on a device's Secure Enclave, it's not truly e2e. It's encrypted, but decryptable by parties other than the two people communicating. In that sense, it's closer to the default security model of Telegram than that of Signal."
I'm hesitant on how much faith to put in Apple's scheme here. On the one hand I generally trust Apple very highly when it comes to security and cryptography in particular. On the other hand I don't see them making account recovery impossible.
However, over the past few years they have been increasingly pushing two-factor verification, and then full two-factor authentication based on a network of trusted devices. The iCloud password used to be enough to manage the account's security and trust, but now it frequently defaults to requiring authenticated approval from a trusted device (instead of e.g. security question responses).
I could see Apple abandoning conventional account recovery if they keep proceeding down this path by providing a huge amount of access redundancy. For example, they could keep redundant copies of all user data synced in iCloud which are respectively end-to-end encrypted on the client with a user's backup keys. Each authenticated user device might have 10 backup keys, with a typical warning that they should be written down and will not be displayed again, etc. The keys could be downloaded from the device and stored by the user but never given to Apple, and would primarily be useful in circumstances where a user only has one trusted device authenticated to iCloud. Then if a user loses primary access to any given Apple device, the user has two ways to recover data:
1) Authenticated approval from another of the user's trusted devices, or
2) Use the backup keys, which do not provide a method of changing the account password, but which instead decrypt the redundant user data corresponding to the key.
The basic idea is that removing conventional password-based account recovery required inordinate redundancy to counter usability loss; you can do this with redundant authenticated devices (each with their own keys), or you can simulate it on one device with redundant keys that are ideally harder to lose.
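The redundant-backup-key idea sketched above might look something like this (pure speculation matching the comment's hypothetical design, with a toy XOR key-wrap standing in for real cryptography): each device mints several backup keys, and any single one is enough to unwrap the data key protecting the end-to-end encrypted iCloud copy.

```python
import hashlib
import secrets

def mint_backup_keys(n=10):
    """Generate n independent backup keys; the user writes these down."""
    return [secrets.token_hex(16) for _ in range(n)]

def wrap_data_key(data_key, backup_keys):
    # Store one wrapped copy of the data key per backup key, so losing
    # most of the keys still leaves a recovery path. Toy XOR wrap only.
    wraps = []
    for bk in backup_keys:
        pad = hashlib.sha256(bk.encode()).digest()[: len(data_key)]
        wraps.append(bytes(a ^ b for a, b in zip(data_key, pad)))
    return wraps

def unwrap_with(backup_key, wrapped):
    """Any one surviving backup key recovers the data key from its wrap."""
    pad = hashlib.sha256(backup_key.encode()).digest()[: len(wrapped)]
    return bytes(a ^ b for a, b in zip(wrapped, pad))
```

Redundancy across keys (and across trusted devices) is doing the work that conventional password reset does today, without giving Apple anything decryptable.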
> It's unclear exactly how Apple is able to pull this off, as there's no explanation of how this works other than from those words by Federighi.
All Apple says is "end to end encryption". From your phone to the cloud is 2 ends, and then from the cloud to the FBI is 2 more. Yay!
I would recommend having an option to generate keys based on something you have and something you know that you won't easily forget, such as a passphrase. That way you can always recover them later!
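Deriving keys from something you know is standard KDF territory. A minimal sketch using PBKDF2 from Python's standard library (parameter choices here are illustrative, not a recommendation of any vendor's scheme):

```python
import hashlib
import os

def derive_key(passphrase, salt=None, iterations=600_000):
    """Derive a 256-bit key from a passphrase; store the salt, not the key."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
    return key, salt
```

The same passphrase plus the stored salt always regenerates the same key, which is what makes later recovery possible without anyone escrowing the key itself.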
1. encrypts her data at the source i.e. on her own computer and
2. sends the encrypted blob over the untrusted network, or so-called "dumb pipes".
The hardware company that makes the user's computer tries to dictate whether and how #1 can be done.
The software for doing #1 does need to be open source.
On mobile, does such software even exist?
And even if it does, is a mobile phone really the user's computer? It is an effectively locked enclosure containing several computers controlled by third parties.
The way to do secure mobile messaging would be to encrypt the message on a computer the user controls, then move the message to the "mobile phone" and then send to the untrusted network.
Alternatively, do not use a mobile phone for messaging if you're worried about others having access to the messages. Wait for a pocket-sized portable computer that can be tinkered with. No baseband, etc.