hacker news with inline top comments (30 Jun 2017, Best)
Milestone: 100M Certificates Issued letsencrypt.org
554 points by okket  1 day ago   172 comments top 18
koolba 1 day ago 8 replies      
SSL certificate from a traditional provider valid for a year: $10.

SSL certificate from a traditional provider valid for two years: $20.

Automated SSL certificate generation and deployment via LetsEncrypt with zero human intervention and more importantly zero human intervention to renew it going forward - priceless.


That's the real value for me. At $10/cert, that's not even a rounding error. But manually generating a new CSR, uploading it via a crappy web form, waiting a random amount of time, proving domain ownership by responding to an email (sent in plaintext), waiting a different random amount of time, downloading the new cert (again, usually sent via plaintext email), and finally copying it over the old cert and reloading the SSL conf... now that costs some serious time, and time is money.

tyingq 1 day ago 1 reply      
This is an interesting situation where "public good" happened to align well with business goals of some deep pockets.

Particularly Google and Akamai, two of the biggest LE sponsors. They both retain good visibility into user behavior (like specific URLs visited) because of things like GA, MITM proxying, etc. But ubiquitous availability of that is taken away from ISP operators.

Which is a good thing. Makes me curious if there's anything else like this that could be achieved. Are there other net public good projects that align well with deep pocket potential sponsors?

jagermo 1 day ago 6 replies      
I think they nail their point with "it illustrates the strong demand for our services."

Letsencrypt is cheap (free) and easy to use.

Even people without a lot of experience can secure their sites and apps, and it just works. Yes, you have to renew it every three months, but that's worth it for the price and the excellent documentation.

Before letsencrypt I always wanted to secure my blog with https but never got around to it, or it just looked super complicated and error-prone.

Then my provider built its own tool for LE and it's just so easy to implement.

TekMol 1 day ago 5 replies      
I still wonder how Let's Encrypt works.

I understand the problem they solve: a user wants to get the public key for a certain domain, so he knows he is talking to a server run by the domain owner and not some man in the middle.

So he asks a third party whose public key he already has. In this case Let's Encrypt. Ok.

But how did Let's Encrypt get the public key from the domain owner?

I know they make the domain owner install some software on his server. But how does that make sure they talk to a server run by the domain owner and not a man in the middle? Does the software include some private key, so they can send the server a "hey, encrypt this" message to prove it in fact has that private key?

If so, why is the Let's Encrypt software so complicated and not just a 5 line script or something?
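The gist, as I understand the HTTP-01 flavor of ACME: Let's Encrypt never needs a pre-shared private key. It hands the client a random token, and the client proves control of the domain by publishing a derived value at a well-known URL on that domain; only someone who controls the server the domain resolves to can do that. A toy sketch of just the validation idea (not the real ACME API; every name below is made up):

```python
import hashlib
import secrets

# CA side: issue a random token for the domain being validated.
def make_token() -> str:
    return secrets.token_urlsafe(32)

# Both sides compute the same "key authorization": in real ACME it is
# token + "." + a hash of the account's *public* key, so no secret
# ever needs to be shared with the CA in advance.
def key_authorization(token: str, account_thumbprint: str) -> str:
    return f"{token}.{account_thumbprint}"

# Server side: publish the key authorization where the CA will look,
# i.e. http://<domain>/.well-known/acme-challenge/<token>
published = {}

def publish(token: str, keyauth: str) -> None:
    published[f"/.well-known/acme-challenge/{token}"] = keyauth

# CA side: fetch that URL via the domain's own DNS and compare. Only
# whoever controls the web server the domain points at can pass.
def validate(token: str, account_thumbprint: str) -> bool:
    got = published.get(f"/.well-known/acme-challenge/{token}")
    return got == key_authorization(token, account_thumbprint)

token = make_token()
thumbprint = hashlib.sha256(b"my account public key").hexdigest()
publish(token, key_authorization(token, thumbprint))
assert validate(token, thumbprint)
```

The client software is big because the rest of the protocol (account keys, CSRs, nonces, retries, renewal) lives around this core, not because the core idea is complicated.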

gator-io 1 day ago 6 replies      
The biggest issue we've had is the short expirations. We have 51 certificates in our organization and do not want to rely on auto-renew.

As a community project, we built a free public service that monitors certs and alerts you when they get close to expiration or are invalid, etc.:


Feel free to use it for any certs.
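For anyone rolling their own monitoring instead, the core check is small; a sketch using only the Python standard library (a real service would presumably also check chains, revocation, and so on):

```python
import datetime
import socket
import ssl

def cert_days_remaining(not_after: str, now: datetime.datetime) -> float:
    """Days until an OpenSSL-style notAfter string,
    e.g. 'Jun 30 23:59:59 2017 GMT'."""
    ts = ssl.cert_time_to_seconds(not_after)
    expires = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=ts)
    return (expires - now).total_seconds() / 86400

def check_host(host: str, port: int = 443) -> float:
    """Connect to host, fetch its live certificate, return days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert_days_remaining(cert["notAfter"], datetime.datetime.utcnow())

# check_host("letsencrypt.org")  # alert if the result drops below, say, 14
```

Run something like this from cron across all 51 certs and the short expirations stop being scary.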

asenna 1 day ago 2 replies      
Genuine question: Are the other smaller cert-issuing services going out of business? If not, what has been their response to LetsEncrypt?

Not that all of them should survive; there are a lot of crappy services that deserved this. But I'm just trying to place myself in their CEOs' position and wondering what the game plan should be.

doublerebel 1 day ago 0 replies      
Shameless open-source plug: if you need an automatic LetsEncrypt module that works with your containerized environment, try ten-ply-crest [0]. It works best with Consul, where it automatically registers based on service tags, and it can securely store certs in Vault, which makes it a great match for Fabio [1].

This way you can run multiple load-balancers and keep your existing infrastructure while still enjoying all the benefits of LE. Written in JavaScript but works with any stack.

Based on the great rawacme, and we'll keep this updated as ACME hits 2.0! [2]

[0]: https://github.com/nextorigin/ten-ply-crest

[1]: https://github.com/nextorigin/ten-ply-crest/issues/27

[2]: https://github.com/AngryBytes/rawacme-node/issues/2

rmc 23 hours ago 0 replies      
Weren't the Snowden releases part of the motivation for Let's Encrypt? If the web is "encryption for everything by default", then dragnet surveillance by the NSA (etc.) is much harder.
tscizzle 1 day ago 0 replies      
The graph of HTTPS percentage has some sudden massive dips a little bit ago. Does anyone know the reason for those?
linkmotif 1 day ago 0 replies      
How timely. I got my first LE cert two days ago! I used caddy, a really nice http server and proxy that's HTTPS first [0].

[0] https://caddyserver.com

corford 1 day ago 1 reply      
I love letsencrypt but I wish they would hurry up with deterministic DNS challenges. It would make securely automating certificate renewal vastly simpler and easier (especially for certs used on non-web-serving endpoints, e.g. mail servers).

I worry that at the moment there are probably a lot of systems out there that leave DNS API keys lying around on endpoints because it's easier to automate the renewal than write a convoluted two stage ansible role/play that delegates sensitive actions to the controller running the play.
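One pattern that keeps powerful DNS API keys off the endpoints, assuming your setup supports it (the acme-dns approach; names below are placeholders): delegate only the challenge label via a static CNAME into a throwaway zone, and scope the renewal host's credentials to that zone alone.

```
; set once, in the real zone:
_acme-challenge.mail.example.com.  IN CNAME  mail.challenges.example.net.

; the renewal host holds update credentials for challenges.example.net
; only, so a leaked key cannot rewrite example.com itself
```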

michaelbuckbee 1 day ago 0 replies      
LE really fills a big need for cert issuance for massive web hosts with custom domains (WPEngine, Hubspot, Shopify are all on LE) as it both greatly reduces the support burden of setting up certs for all of their sites and sidesteps some of the limitations of the legacy cert structure.
binocarlos 22 hours ago 0 replies      
I've been using kube-lego (https://github.com/jetstack/kube-lego) to automate certs for Kubernetes ingress for the past 9 months. It is a joy to just add a domain to a manifest, kubectl apply and then hit a browser with a certificate working. Thank you LetsEncrypt for making the Internet more secure.
id122015 23 hours ago 1 reply      
I'd like to see the list of those 100 million websites; I didn't know there were so many.
pas 1 day ago 1 reply      
And most of that is because LE [still] doesn't issue wildcard certs, nor does it really plan to :(
halloij 1 day ago 1 reply      
Shame they can't issue wildcard certs. I don't see why they cannot.
pulse7 1 day ago 6 replies      
I would like to get a certificate with 3-years lifetime. I know that 90-days will limit the damage from key compromise, but I don't want to automate...
mavdi 1 day ago 10 replies      
Nearly 20K of them are for Paypal phishing sites, and who knows how many for others. The intention is noble, but one can't ignore the damage they've done.
An easter egg for one user: Luke Skywalker einaregilsson.com
777 points by einaregilsson  2 days ago   131 comments top 20
dankohn1 2 days ago 2 replies      
This is similar to the story of the guy who pranked his roommate by buying Facebook targeted ads that only targeted him [0]. Perhaps in the future, the AIs will know us all so well that every interaction will be filled with little in-jokes.


carrier_lost 1 day ago 1 reply      
This is clever! Also worth praising: This site is free, loads quickly, doesn't require a user account, and has an easy-to-understand privacy policy.
scandox 2 days ago 6 replies      
Well Hamill is a better man than I.

I would hate this. If I imagine being him, this is like having to turn up to the office at 7am on a Saturday for a meeting. It's work. In analogy we're watching Mark Hamill get out of bed on a weekend, groan, drink some Pepto-Bismol and get his best shit-eating grin on his face.

comice 2 days ago 5 replies      
Must admit that this seems a bit creepy to me! I guess he's just amused but I wonder if it starts him thinking, is this game monitoring me personally? How else can I be targeted?
eeks 2 days ago 1 reply      
That's just too cool. I have not read such a faith-in-humanity-inspiring moment in a long time.
kbutler 2 days ago 1 reply      
Wonder how many people got "Dad" and wondered why in the world I'm playing against Darth Vader?!?
bcg1 2 days ago 0 replies      
Tried it out to see the easter egg... not only does it work, but the game is pretty awesome too
erikb 2 days ago 1 reply      
And this is how you do marketing the right way.
bojo 2 days ago 1 reply      
I suppose the more interesting question here is what the author did to actually detect the avatar.
cwbrandsma 2 days ago 1 reply      
Next time he needs to put in Batman.
mysterydip 2 days ago 0 replies      
That's great! I haven't put any real easter eggs into one of my games yet, but I definitely will with my next one. You never know who will play your game (well I guess you do if they tweet about it, but you get what I mean).
devgutt 2 days ago 0 replies      
I wish they could have allowed Mark to change the third set by passing his hand over
msimpson 2 days ago 1 reply      
This is why we love Mark Hamill.
_e 2 days ago 0 replies      
If only the easter egg showcased one of his other roles such as cocknocker from Jay and Silent Bob Strike Back [0].

[0] http://m.imdb.com/name/nm0000434/filmotype/actor

lpgauth 1 day ago 0 replies      
Hopefully, this doesn't get shutdown by Hasbro.
simonhamp 2 days ago 0 replies      
Totally worth it!
dmitripopov 2 days ago 4 replies      
It should be noted that there was a time when Mark Hamill was incredibly pissed off that people only saw Luke in him, and his acting career was actually ruined by this role. But now he is older and it looks like he is just fine with it. Or maybe it's just Prozac that makes it look this way.
mikeash 2 days ago 1 reply      
Wow, this place sure is filled with curmudgeons.
kstenerud 1 day ago 1 reply      
It saddens me to see all the comments ascribing victimization where none has occurred.

This is how SJWs operate. Don't do that.

Magic-Wormhole Get things from one computer to another, safely github.com
719 points by lelf  2 days ago   176 comments top 52
ohhhlol 2 days ago 2 replies      
> The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another. This allows the wormhole codes to omit IP addresses and port numbers. The URL of a public server is baked into the library for use as a default, and will be freely available until volume or abuse makes it infeasible to support.

why not make use of https://docs.syncthing.net/users/strelaysrv.html ? lots of servers http://relays.syncthing.net/
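For intuition, the matching idea the quoted paragraph describes fits in a page. A toy relay over plain TCP (the real server speaks WebSocket, expires codes, and rate-limits; every name and code below is made up):

```python
import socket
import threading

# Toy rendezvous relay: pairs the first two clients that present the
# same code, then pipes bytes between them in both directions.
class Rendezvous:
    def __init__(self, host="127.0.0.1"):
        self.srv = socket.create_server((host, 0))
        self.port = self.srv.getsockname()[1]
        self.waiting = {}            # code -> first client's socket
        self.lock = threading.Lock()
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _ = self.srv.accept()
            threading.Thread(target=self._handle, args=(conn,),
                             daemon=True).start()

    @staticmethod
    def _read_code(conn):
        # Byte-at-a-time so no payload bytes are over-read.
        buf = b""
        while not buf.endswith(b"\n"):
            ch = conn.recv(1)
            if not ch:
                break
            buf += ch
        return buf.decode().strip()

    def _handle(self, conn):
        code = self._read_code(conn)      # clients send the code first
        with self.lock:
            peer = self.waiting.pop(code, None)
            if peer is None:              # first arrival waits
                self.waiting[code] = conn
                return
        for a, b in ((conn, peer), (peer, conn)):
            threading.Thread(target=self._pipe, args=(a, b),
                             daemon=True).start()

    @staticmethod
    def _pipe(src, dst):
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass

# Demo: two clients meet under the same (made-up) code.
rv = Rendezvous()
a = socket.create_connection(("127.0.0.1", rv.port))
b = socket.create_connection(("127.0.0.1", rv.port))
a.sendall(b"7-crossover-clockwork\n")
b.sendall(b"7-crossover-clockwork\n")
a.sendall(b"hi b")
got = b""
while len(got) < 4:
    got += b.recv(4096)
assert got == b"hi b"
```

Since clients only ever dial out to the relay, neither side needs a public IP or open ports, which is exactly what the code-instead-of-address design buys.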

mmargerum 34 minutes ago 0 replies      
Some of the golang folks are working on a very interesting project.


bob1029 2 days ago 3 replies      
I would highly recommend looking into this (seemingly-obscure) technique for NAT hole punching: https://samy.pl/pwnat/

It would allow for a "magic wormhole"-style system without the need for a MITM (trusted or otherwise).

bhenc 2 days ago 6 replies      
> Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account?

Assuming that you have openssh and rssh installed, you bootstrap like this:

  useradd -m -g users -s /usr/bin/rssh tmp
  passwd tmp

Then edit /etc/rssh.conf and uncomment allowscp. Share the password with the party you want to exchange data with. Make sure your ports are open.

See: https://serverfault.com/questions/197545/can-non-login-accou...

The use case I see for wormhole is if you're working purely in the python ecosystem. That's it.

You're free to disagree of course, but I prefer ssh, since it's peer-to-peer, end-to-end encrypted, and extends to cover other use cases much more easily (rsync, VNC, etc.).

schoen 2 days ago 2 replies      
This reminds me of the great tool http://www.fefe.de/ncp/, which seems like the same thing only without the cryptographic authentication!
chx 2 days ago 3 replies      
I like using https://file.pizza/ for this.
allworknoplay 2 days ago 1 reply      
The security model here is pretty great assuming you trust the rendezvous server.

Maybe consider an optional challenge/response prompt (like when your pal enters the prompt code, their client generates a second code that they give back to you) to make sure nobody's intercepted the request before them, odds aside (if someone got your initial code somehow, they could definitely man in the middle the request otherwise).

nsxwolf 2 days ago 1 reply      
I would use this just to cut and paste text from my host machine to my VMs, because I've never been able to get that seemingly simple concept to work reliably.
sw1sh 2 days ago 1 reply      
More people should use Keybase; this would be even easier within its filesystem https://keybase.io/docs/kbfs
nattmat 2 days ago 1 reply      
I see you guys arguing about what is easier: wormhole, syncthing, ssh. I'll argue that Keybase is by far the easiest. Just put the files in /Keybase/private/person0,person2
etanol 2 days ago 0 replies      
> Copying files onto a USB stick requires physical proximity, and is uncomfortable for transferring long-term secrets because flash memory is hard to erase. Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account? Copying files through email first requires transcribing an email address in the opposite direction

I had similar motivations in 2006 to write a tool to copy files "point to point". So here's my shameless plug:


In my case, cryptography was not a requirement, though.

llamataboot 2 days ago 1 reply      
Waiting for the security nits, but this looks awesome and I have use cases for it every week
pflanze 2 days ago 0 replies      
Interesting to see the various approaches. I've been using a pair of simple scripts myself for ages:


(they use a few other scripts from the same repo), used as follows (only practical when you can copy-paste; my use case is copying things between servers without needing ssh authentication between them, while having open ssh sessions into both from the same desktop/laptop):

  chris@a:/tmp/chris$ echo Hello > World
  chris@a:/tmp/chris$ netoffer World
  --Run:--
  echo jxqtrb7xfq2e4dqy3uitc7986ydj56w59iqu84b | netfetch 15123
  chris@b:/tmp/chris$ echo jxqtrb7xfq2e4dqy3uitc7986ydj56w59iqu84b | netfetch 15123
  chris@b:/tmp/chris$ cat World
  Hello

(Uses gpg symmetric encryption underneath.)

api 2 days ago 0 replies      
A similar LAN or virtual network oriented tool:


... and another less magical one:


zimbatm 2 days ago 0 replies      
I'm surprised https://datproject.org/ didn't come up already.

It lets you share any data, and it also deals with incremental updates. The main use case is sharing big scientific datasets that update over time.

kyberias 2 days ago 1 reply      
My life is so much happier with Windows. Accessing files on any computer is very easy. Wannacry and that other nasty worm prove it to the world. :)
jedisct1 2 days ago 2 replies      
pveierland 2 days ago 2 replies      
https://transfer.sh/ is another neat service which allows you to upload a file easily using a tool such as curl and get a shareable link. There was one time when I only had Chrome Remote Desktop access to a machine without root, where this was a convenient way to share some files.

  $ curl --upload-file ./hello.txt https://transfer.sh/hello.txt
  https://transfer.sh/66nb8/hello.txt

grizzles 2 days ago 1 reply      
Cool. Does the data transit the server, or does it do NAT / UPNP type stuff for direct comms after the initial rendezvous?
mihaifm 2 days ago 0 replies      
Great collection of tools here. I'm adding my own approach, aimed at getting around proxies/firewalls: the files are encrypted and sent in the body of an HTTP request. The receiving end is a simple nodejs http server that can be started on the fly.


nathan-osman 1 day ago 0 replies      
Users may also want to look into NitroShare. Although it only works on the local network, it uses IP broadcast for automatic peer discovery (next version will use mDNS). It includes installers for Windows and macOS. Debian, Ubuntu, and Fedora include it in their respective package archives. There is also an Android app. (Disclaimer: I am the maintainer.)
aidenn0 1 day ago 1 reply      
Anybody know how the author generated the word-list? I generated one for my own passphrase use; I took the 2000 most common english words and used metaphone to prune any similar sounding words, which got me down to around 600; I then truncated the list to 512 yielding 9 bits per word.
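I don't know how the author generated theirs, but the pruning step described above can be sketched like this. A real pass would use Metaphone (via a phonetics library); the crude vowel-dropping key below is only a stand-in:

```python
# Take a frequency-ordered candidate list, bucket by a phonetic key,
# keep the first (most common) word per bucket, then truncate to a
# power of two so each word carries a whole number of bits.
def phonetic_key(word: str) -> str:
    out = []
    for ch in word.lower():
        if ch in "aeiou":
            continue                  # drop vowels
        if not out or out[-1] != ch:
            out.append(ch)            # collapse doubled letters
    return "".join(out)

def prune_similar(words):
    seen, kept = set(), []
    for w in words:
        key = phonetic_key(w)
        if key not in seen:
            seen.add(key)
            kept.append(w)
    return kept

words = ["bat", "batt", "boat", "tree", "tray", "stone"]
print(prune_similar(words))     # ['bat', 'tree', 'tray', 'stone']

# wordlist = prune_similar(top_2000)[:512]   # 2**9 -> 9 bits per word
```

(`top_2000` is a hypothetical frequency-ordered list; the truncation to 512 mirrors the 9-bits-per-word figure above.)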
jpillora 2 days ago 0 replies      
See also https://www.sharedrop.io/ (WebRTC-based AirDrop clone)
foota 2 days ago 1 reply      
How is the 16 bit wormhole code secure against brute force?
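As I understand the design, the code is deliberately low-entropy because it feeds a PAKE (SPAKE2): a network attacker gets exactly one active guess per code, and a failed guess is visible to the sender, who aborts. So the relevant number is the per-attempt success probability, not offline search time:

```python
# One active MITM guess per wormhole code (the PAKE property):
bits = 16
p_single = 1 / 2**bits
print(f"{p_single:.6%}")   # 0.001526% chance per attempted interception

# Without the PAKE, offline brute force of 2**16 codes would be trivial:
assert 2**bits == 65536
```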
mbonzo 1 day ago 0 replies      
Hey everyone! I just did a quick video on Magic Wormhole showing how it works: https://www.youtube.com/watch?v=1SXCLAlpD70
daxorid 2 days ago 2 replies      
So, basically, scp (with a 16-bit session key that has to be exchanged oob)?
astrodust 2 days ago 0 replies      
Extending this to a small text-based GUI with chat could make it a lot more useful, too. Sort of like a person-to-person BBS.
Aissen 2 days ago 0 replies      
Looks like another application of the key exchange algorithms used in Mozilla Weave/Firefox Sync. They used to do that to associate new devices and send them the keys, but they killed it because the UX was more complex than user/password.
curtis 2 days ago 0 replies      
> The receiving side offers tab-completion on the codewords...


popey 2 days ago 0 replies      
This is so handy. I pushed it to the snap store. So if you're on a snap supported Linux distro you can just run "snap install wormhole" to get it and future updates.
abhinai 2 days ago 0 replies      
Interesting. How does this compare to WebRTC based peer-to-peer file transfer like perhaps https://simplewebrtc.com/filetransfer as an example?
jbergens 2 days ago 0 replies      
This kind of tool should probably be distributed as a binary but then it would be a bit harder to know it doesn't do anything strange.

I would probably write it in go to make it easy to compile for different platforms.

dagw 2 days ago 0 replies      
This would be so much more useful if it was written in a language that made it easy to distribute the whole thing as a single statically compiled binary.
dak1 2 days ago 2 replies      
How does this compare to AirDrop (besides obviously being cross platform)?
consultSKI 2 days ago 2 replies      
Make mine pushbullet. Love it.https://www.pushbullet.com/
QuamVideri 1 day ago 1 reply      
Seems likely the use of the "Rendezvous" term is going to get you a cease-and-desist just like Apple got over what is now known as "Bonjour" (aka mDNS)


mcrmonkey 2 days ago 0 replies      
There are some handy docker images for this over on the Docker Hub if you want to try it and don't have (or want) the needed Python setup but happen to have docker installed.
dang 2 days ago 0 replies      
interfixus 2 days ago 1 reply      
Strictly for internal networks, and no crypto in transit, but dead easy - also for grandma - and covering phones as well as the three pc platforms: Dukto. I use it all the time.


reacweb 2 days ago 1 reply      
When I want to share things, I use ssh to put them in the static part of my website in a directory with a random name, then I send the URL by mail. My sftp client is already configured with ssh keys. When the thing I have to send is a collection of jpeg files, I use fgallery.
JepZ 2 days ago 0 replies      
Not exactly the same, but for those Linux/KDE users who look for a tool to connect to another machine within the same LAN, I can recommend KDE-Connect (file-transfer, notifications, shared-clipboard, etc.).
Thespian2 2 days ago 0 replies      
Trusting the baked-in rendezvous server would seem to be the most obvious security "nit," which could be addressed by compiling and running your own server. But out-of-the-box, that would seem to be a weak-point for MitM attack.
hossbeast 2 days ago 0 replies      
> The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another.
the_cat_kittles 2 days ago 1 reply      
the motivations are great:

"Moving a file to a friend's machine, when the humans can speak to each other (directly) but the computers cannot

Delivering a properly-random password to a new user via the phone

Supplying an SSH public key for future login use"

i had never even thought about those, and this is a great solution afaik

devindotcom 2 days ago 0 replies      
Nice, this looks useful and extensible! I popped it up on TC, I like the idea of transferring files wizard-style.


ReverseCold 2 days ago 1 reply      
I expected something local, like airdrop, but cross platform. (someone, please)
Frogolocalypse 2 days ago 0 replies      
Very simple, handy, and yet powerful utility. Thanks.
cestith 2 days ago 1 reply      
What's wrong with S/MIME? Email doesn't have to be unencrypted.

Also, you only need to send the public part of an SSH key to the remote end to set up future keyed connections to that system.

This seems like a solution with very narrow problems to solve.

dexzod 2 days ago 0 replies      
Interesting. Would it be possible to use it on Android and iOS?
draw_down 1 day ago 0 replies      
Cool, seems like AirDrop which I find very useful.
Is it unethical for me to not tell my employer I've automated my job? stackexchange.com
647 points by Ajedi32  1 day ago   500 comments top 111
imh 1 day ago 10 replies      
This question is a beautiful example of the typical incentives workers feel and how screwed up they are. On HN people talk often enough about how if you have a worker who gets their job done in 30 hours instead of the company's usual 40-60 hours, you should give them 30-100% more responsibility, but much more rarely "and 30-100% more pay." Butt-in-the-chair hours are super important culturally and it's been that way for a while. Incentives are screwed up enough that we're getting questions like this.

If you've ever thought "I'm done for the day, but I'm going to hang out a little longer to leave at a more respectable time," then you're feeling (and doing) the same kind of thing.

afandian 1 day ago 11 replies      
When I was a student I took a work-from-home job manually generating HTML pages for an online furnishing shop. They somehow had a successful web presence but all sales were done by phone (early 2000s).

They wanted to pay me by the hour, but I negotiated paying by the page instead.

Of course, I automated the job. And surprisingly, at least to naïve me, they were annoyed that I automated it. Even though they got the same result for the same money, and we had explicitly agreed to do it by output, not by time.

I learned something that day, though I'm not sure what.

ohyes 1 day ago 4 replies      
I think the real question is whether this person is salaried or hourly.

If they're hourly, then yeah, billing 40 hours a week when you only did 2 is fraud. If salaried, I think it's okay.

Here's why: Individuals in the company will be good or bad, ethical or unethical. The company itself will (likely) be largely amoral and driven solely by a profit motive.

So when this person announces that he's automated himself out of a job, it sounds like it won't be a matter of 'great work, here's a cut of all the money you saved us and some more interesting work.' It'll likely be a matter of 'thanks, here's your contractually required severance.'

That is what it is, but if the company is allowed to be driven by profit motive, he should be too. It is within the best interests of his profit motive to continue with the automated work. For some reason when the person is an employee, it's no longer okay to be a sociopathic profit-motivated machine, we're actively disgusted by this type of behavior.

It seems like there should be a fairness principle in this situation when making a decision about things such as this that treats the employer and the employee as equals in a contractual obligation.

simonh 1 day ago 2 replies      
If he was a company providing a service this wouldn't even be a subject for discussion. It would be a non-issue. Also he's been pretty much instructed by the company not to rock the boat. It seems pretty clear they really don't care as long as the work gets done.

I've known plenty of sysadmins that have significantly automated most of their work and mainly just monitor and maintain, good for them. Nobody ever criticised them for this, in fact it's good practice. Finally he's not really being paid for hours worked. If it took him every hour of that time the first few months, but then he got better at it and later it took 30 hours instead of 40, nobody would care. In fact I'm sure the company fully expects something like that to happen, again they just don't care.

He should stop introducing errors though.

hessproject 1 day ago 3 replies      
There are two ethical lines the poster may have crossed.

> I even insert a few bugs here and there to make it look like its been generated by a human.

As a few others in the original post pointed out, this seems to be the biggest issue. He is intentionally misleading his employer as to the nature of the work he is doing. The automation itself isn't immediately unethical, but the intentional misdirection could be.

The second issue depends on whether he is paid for his time or to fulfill his job duties. In the case he is being paid to fulfill his job duties, he is doing the job he was hired to do adequately, he is meeting the deadlines expected of him by his employer, at the price he negotiated when he was hired. However, if he is being paid for time, it seems clearly unethical to bill the company for 38 more hours than he worked.

renlo 1 day ago 2 replies      
I graduated in the recession without any real skills or an applicable / usable degree (lib arts in a language I could barely speak).

The first job I got after college was for data entry where I was expected to go to an email inbox which received some automated messages with some strings in them and to copy these strings and paste them into an Excel spreadsheet.

I was expected to do this for ~6 hours a day every day. Sitting there, copying and pasting strings from some email. Then this spreadsheet would be forwarded to my boss who would forward it to some other people (I don't remember who these people were, probably for auditing of some kind).

After a couple of weeks of this I really started to hate it. I had taken a class on spreadsheets when I was a kid and knew that there was a way to automate it all, so I did a couple of Google searches and figured out a way to copy all of these numbers automatically. It was done using some VB script IIRC and some spreadsheet formulas.

I stupidly told my boss. So now he had me doing other stupid and mind-numbing work for those 6 hours I would have been copying and pasting strings from the emails (like manually burning hundreds of CDs one after the other with Windows XP and a CD-burner which only worked half of the time).

I quit a week or two later, but learned a valuable lesson. Don't tell your boss. Side note: this is how I became interested in pursuing programming as a profession.

It would be great if there was a means for people to sell technology like this to their employers, for those rare cases where someone goes above and beyond the expected solution. In reality employers don't care because they own your output regardless so why do they need you?

Bakary 1 day ago 1 reply      
This ethical question seems bizarre in a world where large blocks of the economy rely on effective misrepresentation or information asymmetry (advertising, etc.) and wealth itself is concentrated in the hands of a few. Those are stereotypical and cliché statements to make but I don't think that makes them less relevant.

As far as I am concerned, this person can provide for his family, and has given the company the results they want. I don't see how it's a problem.

The "late-stage" power imbalance in favor of companies does provide interesting ethical arguments in my opinion.

bjourne 1 day ago 2 replies      
What's wrong with us workers? Do you think the Apple executives have some secret message board where they ask questions like: "Is it unethical for us to sell iPhones for $800 when they only cost $20 to produce?" Capitalism is what it is; you play the game and shouldn't feel bad the (few!) times you win.
protonfish 1 day ago 4 replies      
I did not expect to see so much contention about this. A company is paying him to do a job, and he is doing that job. Is the problem that he isn't miserable? This baffles me.
smallgovt 1 day ago 1 reply      
This may be going against the grain, but I think the real question the OP needs to ask is whether he really cares whether what he's doing is ethical or not.

Evaluating decisions like this really comes down to understanding your values and owning them.

Values are the measuring sticks by which people quantify success in life. Whether or not we realize it, we constantly measure our actions against our values, and how we 'measure up' determines our self-worth.

In this specific example, there are two conflicting values: integrity and family. They are in direct conflict, which is putting the OP in a stressful situation -- acting in the most honest way here will lead to a worse life for the OP's family. Creating the best life for his family requires that he must lie.

So, the OP needs to ask himself:Do I value integrity? Do I value my family? If I value both, which do I value more?

Personally, I don't think there's really a right or wrong answer to these questions. There's no intrinsic value in the universe -- but assigning value is part of the human condition and we feel fulfilled when we lead a life of purpose (however arbitrary). When faced with difficult decisions like this, it's important to be aware of what your values are. The 'right' decision for you will be apparent.

danjoc 1 day ago 2 replies      
You should tell the company. There's probably a $20 gift card in it for you.


hedgew 1 day ago 1 reply      
The way I see it, this is the employer's problem. In a good company, what benefits the company, also benefits the employee. In this case the employee and the company have different incentives, and the company does not care enough to solve the problem of incentives.

There are many, easy ways the employer could solve this problem so that both parties benefit. The employee does not have the same ability to pursue mutually beneficial solutions, and is acting like a normal profit-seeking business would.

gsdfg4gsdg 1 day ago 1 reply      
It's horrifying to see people questioning the morality of their perfectly legal and adequate method of collecting a paycheck. Their employer has 0% loyalty to them. Their employer would stab them in the throat for 50 cents. And here is the employee asking if his sweet arrangement is ethical.

That's how hard Americans have been brainwashed into the idea of corporations and business as "Good" -- that a man is asking whether spending 38 extra hours a week with his son is built on an "unethical" foundation.

jstanley 1 day ago 1 reply      
Neither he nor the company would be better off if he were still doing it manually.

I don't know whether I think it's ethical or not overall, but it's at least a better outcome than if he had continued spending 8 hours a day updating spreadsheets by hand.

He's doing a better job than he was before, for the same price, and he gets more free time. Everyone's a winner. Admittedly, he is a winner by quite a bit more than they are, but he would have been perfectly within his rights to continue doing the work manually. Then they'd be paying the same price as they are paying now but getting work with more mistakes in it. Why would they want that?

pmoriarty 1 day ago 1 reply      
What is considered ethical or unethical always depends on who you ask.

Ask most slave owners a few hundred years ago if it was ethical to whip slaves (or even own slaves, for that matter) and you'll get one answer, but quite a different answer from the slaves themselves.

You'll get different answers to this question whether you ask it of employers or employees, capitalists, socialists, or communists, people who feel exploited or the exploiters themselves, and so on.

I'm not sure how much one could make of such a survey, other than that on controversial issues there are great differences of opinion.

peterburkimsher 1 day ago 0 replies      
Is it unethical to keep a chair warm when my boss didn't give me new tasks to do?

For other areas of life (immigration), I need to get more years of continuous relevant work experience.

I come to an office every day, but my boss just doesn't have enough to keep me busy. My job title is "Project Engineer", which is vague enough to cover everything from DLL debugging to Node.JS programming to network monitoring to evaluating Advanced Planning systems. The latest task is to do some online course in machine learning, even though he didn't specify how the company will need it.

On bad days, I feel useless. But I reconcile the situation to myself by saying it's basically a "basic income" (the salary is not high; the minimum that people on my visa can have). I could think about changing after I have the years of work experience, but years just come with patience, not with productivity. I feel like my situation isn't "fair" because my friends are so much more stressed, but I need the years, not the results.

I also do a lot of side projects and post them online (e.g. learning Chinese - http://pingtype.github.io ), but my contract and visa specifically state that I can't have any other paid work. So all my projects must be free and open source.

If the author of the automation scripts wants to comfort his conscience, I suggest reading more about Basic Income theories.

dgut 1 day ago 1 reply      
You get to spend more time with your son. In a country with a terrible maternity and paternity leave policy, it's morally right to do whatever possible to spend more time with one's children. You are doing a great service to the country, as your child will turn out to be a more mentally healthy adult. Just for that reason (besides that you are providing value to your employer), keep going!
xenophonf 1 day ago 2 replies      
Once, long ago, I did two weeks' worth of work for a multi-person team in about a day, thanks to a little sed/awk magic. The work would have gotten done a lot faster if I didn't have to deal with the completely shitty X-over-dialup remote access setup they forced me to use. The project manager was actually upset with me because now we couldn't bill the client for 80x5 hours worth of work or whatever it was. Needless to say, I quit that job the following week. It's one thing to have a little downtime now and then to recharge oneself. It's quite worse to be bored because there's nothing fun/interesting/useful to do.
mikekchar 1 day ago 0 replies      
I love questions like this, especially when the person replies to the reactions. I'm less interested in the answers than in the question, "Why did you post the question?" There are lots of people saying that they think it is unethical, and the OP has taken time to respond to these reactions with a rationalisation.

In other words, the OP feels guilty and is seeking permission to continue with the course they have already chosen. They feel they won't get it from their employer, so they feel the need to find the permission from random strangers on the internet.

I've done a lot of process improvement in my career and this is always the trickiest bit. People make decisions and build elaborate walls to protect them. Exposing the decision does nothing to remove the walls -- it only prompts the builder to design even more elaborate walls. It pays to be sensitive to this!

awjr 1 day ago 0 replies      
This reminds me of this joke https://www.buzzmaven.com/2014/01/old-engineer-hammer-2.html :

The Graybeard engineer retired and a few weeks later the Big Machine broke down, which was essential to the company's revenue. The Manager couldn't get the machine to work again, so the company called in Graybeard as an independent consultant.

Graybeard agrees. He walks into the factory, takes a look at the Big Machine, grabs a sledge hammer, and whacks the machine once, whereupon the machine starts right up. Graybeard leaves and the company is making money again. The next day Manager receives a bill from Graybeard for $5,000. Manager is furious at the price and refuses to pay. Graybeard assures him that it's a fair price. Manager retorts that if it's a fair price, Graybeard won't mind itemizing the bill. Graybeard agrees that this is a fair request and complies.

The new, itemized bill reads.

Hammer: $5

Knowing where to hit the machine with hammer: $4995

odammit 1 day ago 0 replies      
I worked in data entry at a large hospital in the late 90s. I automated the entry of reports that someone was printing from Excel and that I was keying into another system.

My boss walked by one day while I was reading the Monstrous Compendium and asked "what are you doing?" To which I responded, "uh, reading the Monstrous Compendium"... then explained that I had automated my data entry by having the people upstairs bring down floppies with the spreadsheets on them instead of printing the reports, "to save paper".

Curiously I didn't sign any paperwork when I started regarding intellectual property and I'd written the app on my computer at home... sooooo, I got a bonus and a promotion to the IT department!

They fired the rest of the data entry team :(

dewiz 1 day ago 0 replies      
A company automating jobs and firing people is called progress. A person automating his job without being fired is sustainable progress.
timthelion 1 day ago 0 replies      
Recently, I paid two guys $150 to carry 4 tonnes of gravel up some stairs into my garden and level it. I figured it was a day's work for the two of them, and that the price was reasonable. But then they ran the whole time, and did it in half the time I had estimated for the job. And emotionally, I felt really ripped off, because I was paying twice the going rate for workers in my country. But WHY SHOULD I FEEL RIPPED OFF? It is wrong to feel ripped off in such a situation. Their doing it quickly saved me time and stress as well.
stmaarten 1 day ago 0 replies      
This case is a microcosm of a fundamental tension. Namely: how should we divide the pie between capital and labor, if baking the biggest pie requires devaluing labor? There are explicitly positive and normative components to that question. Positive analysis cant resolve normative questions, and vice versa.

Personally, I'm not interested in questions such as whether the OP has been dishonest, or what the status-quo legal regime would prescribe. I am interested in the underlying economic reality. The OP has developed a technology with real and quantifiable value. He created wealth. So: who should keep it?

At the macro level, I think it's pretty clear that the existing economic and legal regime would have these gains accrue to capital owners. After all, markets (when they function) do a good job of allocating resources according to value signals. But that's just a default allocation; it doesn't tell us "who should keep it".

dragonwriter 1 day ago 0 replies      
It's unethical to deliberately introduce errors. If you have broad discretion about how you do the job, it may not be unethical not to actively call your employer's attention to your automation, though (but for the deliberate introduction of errors) it should, with a reasonable employer, be beneficial to do so.
Posibyte 1 day ago 0 replies      
I think the problem is actually deeper than whether or not it's ethical: the structure in which we place people gives them more incentive to hide their improvements than to expose them and help the company flourish. Why should OP ever reveal it to his boss? Ethics? What do those matter on the bottom line for them? He could be fired or disciplined. His experience might be positive, but judging from the comments and how people are reacting to it, I wouldn't be very sure of that.

In this case, I think we need a positive incentive of some sort to give the employee a reason to reveal this automation. People shouldn't be afraid to tinker and learn in the face of punishment.

lr4444lr 1 day ago 0 replies      
There's nothing unethical with this situation as the poster describes it. Is it unethical for you not to tell prospective buyers of your house what other offers you've received? Is it unethical for you not to tell the other side of a legal trial what character, logical, and emotional arguments you intend to use to sway the jurors?

No; some relationships have an inherently adversarial, zero-sum component, and maintaining informational asymmetry could only be unethical if the other party would bind him- or herself equally to not taking advantage of your sharing it. And speaking realistically, there isn't a snowball's chance in hell that a middle manager of a large company with legacy systems would not fire this guy if this information got out and he were told to directly, or was generally pressured to keep down department costs.

outworlder 1 day ago 0 replies      
I'd say, get rid of the intentional errors. If anyone asks, it's just because they became so good at their jobs that the work is now spotless. Which, frankly, it's the truth: business requirements were discovered in such detail that automation could be performed.

I don't think it is stealing. In fact, they are getting exactly what they asked for: the job is getting done. The fact that it is taking less work (but it is still taking some work; he still needs to do clean-up before running the automation) should be irrelevant if it is his only task.

This is assuming there are no specific instructions on how the work should be performed.

If it were a silicon valley-type company, then it is possible that this contribution would be properly recognized and the employee offered another position due to the demonstrated skills. From the looks of it, it's unlikely to happen.

So here are the choices:

Not disclosing, and getting into philosophical arguments on whether or not they are being overpaid. Depending on the complexity, this is the kind of thing that consulting companies thrive on and charge big bucks for. So, in fact, they may even be UNDERpaid, if this is eventually disclosed and becomes company property (maybe when they decide to leave the company?).

Disclosing will force some tough conversations. They will probably want the software, which they are entitled to, as it was done on company time. And once they have it, there's nothing preventing them from firing the person.

And, to be fair, companies do that sort of stuff all the time. They may start doing things manually for customers, figure out some monetary value they should charge to cover costs, plus profit. Eventually things get automated. Do they reduce their prices? Of course they don't. Cost optimization and the like.

EDIT: typo (also, using gender-neutral pronouns is tough)

empath75 1 day ago 1 reply      
five years ago, I had an entry level overnight noc position at a big company, and within 6 months I had scripted almost everything and was watching Netflix most of the night and didn't make any particular effort to hide that I had nothing to do.

I got rewarded for it with a promotion, and then I did the same thing and got another promotion, and another. I'm making more than twice what I was making before, and now my job is telling other people how to automate their jobs away.

I keep scripting annoying tasks because I'm lazy and get rewarded for it with more annoying tasks and more money.

If he had just told his boss, and put what he did on his resume, I'm sure he'd be making more money today and have more interesting work than he would have if he hadn't lied.

whoami24601 1 day ago 0 replies      
People here seem to be generally in favour of the OP, unlike the top-contributors on stackexchange. I too think that employers "naturally have the advantage of a power imbalance" (by keinos) and in my opinion they often take that advantage.

As OP I would think about two options in that situation - although I'm not sure if I can judge that well, since I don't have a child. In both cases, though, I would stop faking bugs.

1) Once OP stops pretending to work a full-time job, the employer might be smart enough to realize that OP has more capacity and thus might provide him with more work. From my point of view it's not the employee's fault that the employer does not know what's going on with the capacity. They don't give you enough work, so why should you pretend to work?

2) OP could be pro-active and inform the employer of his increased capacity. Maybe they provide OP with new work.

It might be that the employer requests the automation tool later on, but it could also be that the employer overlooks the free capacity as well.

hsod 1 day ago 1 reply      
Ethical behavior generally requires honesty and forthrightness. If you are only concerned about your own ethical behavior, you should tell them. Keeping it a secret is effectively lying.

If you want to do some ethical calculus, you can probably quite easily determine that your employer (or the general economic system) is less ethical than your keeping this a secret, which may give you some "ethical leeway" when dealing with them.

Furthermore, you could determine that your employer is likely to behave unethically towards you if you told them, in which case you may be able to determine that keeping it a secret is a net-positive ethically speaking.

But yes, it is unethical to lie to your employer about how you're doing your job.

SubuSS 1 day ago 0 replies      
All software rots: If they had the ability to run the script this engineer built, there is a high chance the same folks would've noticed the automatability of the job.

IOW - I don't understand how the user thinks this will be taken away from him. It would seem he is a core part of the execution of said script, considering he has to adapt it to new data rules etc. once in a while.

IMO - The fact that he is spending 2 hours and billing 40 is deception though: I mean, in an ideal world, the company would totally notice if they estimated an assembly line to produce X items in Y hours and it actually ends up producing 2x items.

Now whether you can engage in said deception, whether everyone else is directly/indirectly doing it, your family situation, etc. all lie in the zone of subjectiveness. You just gotta trust your gut and go with it. But one thing is sure: if you get caught, you are getting a reaction - fired, or possibly worse. The HR/human ego is far too fragile to let this go in 99.9% of cases.

nthcolumn 1 day ago 1 reply      
As software developers we implement change, which often means others lose their jobs. I've worked myself out of more jobs than I care to remember, automating ruthlessly, fixing things even when it meant I was redundant as a result. Not to do so would make me feel guilty about all those systems I wrote which made others redundant. That's really what I.T. was for, years ago. There was a time I was like some horrible spectre: if you saw me, that meant yo' ass. Once I interviewed some users about some task they were meant to be rekeying; they hadn't done it for months as the old requirement had gone. I followed it back to the person who was sending the first part, a nice little old lady, and told her gleefully she didn't need to do that onerous first collection task anymore, whereupon she informed me that it was literally all she did. I just left.
teekert 1 day ago 0 replies      
If you were a company, no one would bat an eye, they'd say your employer is free to scour the market for the best options, they found you and are happy. I think many companies provide services that are easily automated and customers don't realize how little human labor is actually involved.

You could offer something like: "Hey, I can rewrite the entire system and make it completely automated. This will cost you ((time_it_takes_find_good_job + some) * your monthly pay). After that I'll be gone." That you already did the work doesn't really matter imo, and you leave your boss better off, and hopefully yourself too.
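As a back-of-the-envelope sketch, that pricing formula might look like the Python below. The function name, the example figures, and the two-month cushion are my illustrative assumptions, not the commenter's:

```python
# Rough sketch of the buyout price from the comment above:
# (time it takes to find a good job + some cushion) * monthly pay.
# All names and numbers here are illustrative assumptions.

def buyout_price(monthly_pay, months_to_find_good_job, cushion_months=2):
    """Enough pay to bridge the job search, plus a small cushion."""
    return (months_to_find_good_job + cushion_months) * monthly_pay

# e.g. $5,000/month and an expected 4-month job search:
print(buyout_price(5000, 4))  # 30000
```

The point of pricing it this way is that the employee is made whole for the risk of walking away, while the employer pays a one-off fee instead of an indefinite salary.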

pinewurst 1 day ago 1 reply      
How is this the smallest bit unethical?

The person is paid to do a job and that job is done, and seemingly well. End of discussion.

Spooky23 1 day ago 1 reply      
If you're a salaried employee assigned a task, you fulfill the task, and are available to respond to requests during business hours, there's no ethical issue here.

If you're hourly and you are spending 3 hours a week and billing for 40, you're in a bad ethical place in my mind.

tfont 2 hours ago 0 replies      
So you mean "Is it ethical for me to tell my employer I've automated my job?"
dlwdlw 18 hours ago 0 replies      
One thing to realize is that at the higher levels, it's accepted that salary is basically a retainer. It's payment for the option to ask for work but not an obligation. This is truer the more "creative" or "strategic" your job is. It is known that specific tools need to be used in specific ways at specific times.

However work culture is so ingrained that things devolve into chaos if this is openly said. Games are created so that everyone has something to do mainly so everyone feels equally important.

The behavior itself is OK as long as the game isn't threatened. As long as you aren't actively destroying something anyone above you has created and can produce when called upon, do what you want.

In practice this may mean appearing to do nothing all day, but it being OK because you give a script-automating seminar every 2 weeks. Or maybe changing workspaces every now and then, so that when you're missing you get the benefit of the doubt.

If your level in the org is so low though that you have 5 managers above you all believing in the 12hr work day scheme then you are very limited and will most likely be punished. In most software orgs though this isn't an issue as the "new" culture around thinking work is more accepted.

baron816 1 day ago 1 reply      
I had pretty much automated a past job and when I quit to try to become a software engineer, they asked me for advice for hiring my replacement (what they should be looking for, etc.). I told them not to hire anyone and that I had hardly been doing anything for months. They hired someone to replace me anyway.
mnm1 1 day ago 0 replies      
The only ethics in business are those enforced by the state. Is it unethical for an employer to ask a salaried employee to work more hours? No, it isn't, so I see no difference here. The exact same principle applies equally in both cases. Hours are meaningless if he's salaried. I think it'd actually be unethical to himself if he told on himself to his employer. Our first ethical duty, after all, is to ourselves.
keksicus 1 day ago 0 replies      
How ethical is it to tell your boss and then lose all that time with your son. What ethics do you care more about? Making sure your boss gets all the time he thinks he's paying for out of you? Or your son getting as much time as you can give him? Are you more loyal to your boss than your son? If your ethics are driving you to cuck-out and screw yourself, it's time to delete your "ethics" and install new ethics. There is such a thing as fear of success, I'm hoping you don't have that fear.
greggman 1 day ago 1 reply      
I'm just guessing but there is a legal concept call "duty of loyalty". https://www.google.com/search?q=employee+duty+of+loyalty

The short version of which is that the duty of loyalty requires an employee to refrain from behaving in a manner contrary to his employer's interests. That probably means what he's doing isn't okay, but of course laws are different in every area and IANAL.

It would arguably be different if he were a contractor, I'm guessing.

EdgarVerona 1 day ago 0 replies      
Reading this, it felt pretty strongly like the person wasn't asking for genuine input as much as they were asking for permission from strangers. He defends the "don't tell" side of the options with a fervor that strongly suggests he's already made up his mind but needs peer approval to assuage his guilt.

I can't blame the guy. I've lived in areas where tech jobs are thin on the ground. But what I would do if I were him would be to start looking, and try to find a new job as quickly as possible so as to minimize the amount of time in this state. I can understand a fear that it may take a while to find a new job - and if he has that fear, he should start looking now instead of assuming that coasting like this is okay.

lordnacho 1 day ago 3 replies      
I'm surprised nobody has suggested he takes a second remote job. He's looking to send his kids to college after all.

I don't see the problem with doing your job super efficiently. Adding bugs is just "a duck", not a real productivity loss.

I don't see how you can claim the company wants hours rather than work done.

mch82 1 day ago 1 reply      
Just for fun let's flip this around: Is it ethical to keep doing the job manually if you know how to automate it and not tell your employer?
carlisle_ 1 day ago 0 replies      
OP asks "is it unethical?" in his question and proceeds to ignore the ethical issues raised by commenters. Sounds like he was just looking for validation to keep doing what he's doing. I would concur with the person that labeled it more humblebrag than question.
drej 1 day ago 0 replies      
You know John Oliver's "cool" remark, when he's being sarcastic? That pretty much describes every single response I got when I told my superiors I had automated (a part of) my job.

I don't expect them to pop champagne, but they could say something along the lines of "this is interesting, what else could we automate?" It's usually more like "cool, here's more work".

It doesn't deter me from telling them in the future. Maybe someone will appreciate it one day. Maybe not.

notadoc 1 day ago 0 replies      
I don't see much of a problem.

You are paid to do a task. Is the task getting done, and at the expected quality level? That is what matters, is it not?

Aside from that, if you can automate your job, you could likely create a service or product to sell that automation to that employer...

auserperson 1 day ago 2 replies      
Is capitalism ethical though? Isn't every employee an exploited person?

What is the robbing of a bank compared to the founding of a bank? - Bertolt Brecht

getdatpapersong 1 day ago 0 replies      
Using a throwaway because people are going to throw a hissy fit.

Dude, you have family and your _ONLY_ responsibility is towards them. Period. How is this even a question? Get paid, during your "working time" learn something extra and increase your earning potential. You're in a very unique advantageous position, seize the opportunity.

You're doing your job, you don't have to be some schmuck too.

holografix 1 day ago 0 replies      
In my point of view, no it's not.

Your employer's sole purpose as an organisation is to make a profit for its shareholders. Unless you are one of the shareholders or will be paid more for increased production, you have no incentive to produce more.

Bar your personal relationships of course, if your boss is a great person and you feel like doing them a favour then that's an incentive.

If you believe you could be paid more for increased production, via a raise or promotion, discuss this with your boss in the form of:

"I have an idea, which I need to spend X hours working on and I'm fairly certain I can get it to work and it would provide Y% more productivity. If I do raise my productivity by Y% what would this mean to me?"

If they state something attractive as the outcome, get it in writing. I interpret this as basically the company paying you for your IP, especially if your automation can be replicated for other employees.

Now if your sole job is automating stuff / increasing productivity at the organisation... then that's a whole other story.

Just remember that if YOU automated your job, the organisation could ALSO do it and not need you anymore - so maybe use the extra time to find a job not easily automated.

DogPawHat 1 day ago 0 replies      
Right, look: this guy is doing everything that is being asked of him for the price agreed. None of the more wonky ethics of the situation change that, and I don't subscribe to the belief that he owes the company anything more than his fairly-priced labor.

The only thing he is doing wrong is under-utilizing his own talents and potential productivity, for which the optimal solution is for him to find a better job. As he seems to indicate that his current options are to keep working 1-2 hours a week or likely be unemployed, I believe he will seek to preserve his employment in whatever he does about disclosure, and wait for a better opportunity to present itself.

If it's ultimately a choice between providing for him and his son and not, its pretty much no choice at all, ethics be damned. I know which outcome I would prefer.

stuaxo 1 day ago 0 replies      
I did exactly the same thing in a data entry job, after the 2001 internet bubble burst.

Semi-Automated a highly repetitive job that took 5 minutes per document to process, down to under 1.

Once we changed jobs, I went to the IT department; they were not happy with people outside their department automating things, and had a similar project already.

In the end, a year later, my manager was replaced by another who was his ex-wife, and suddenly the fact that I was wearing trainers to work was an excuse to let me go (though a lot of data entry people did).

It may or may not have been down to the fact that, with my automation, her department would potentially be 1/5th of its size.

That company no longer has their large offices in the town I was in, with inefficient manual processes involving lots of paper.

The good thing that came out of it, was pushing me towards software development.

peterburkimsher 1 day ago 0 replies      
An appropriate comic from Poorly Drawn Lines:


"Welcome to work. You'll spend your time here in two ways: overwhelmed and underwhelmed."

"Is there a third option?"

"Well, there's 'whelmed'. But I'm not sure if that's a word. So no."

janxgeist 1 day ago 1 reply      
So far (10 years) these rules have always worked out for me in the long run:

1) Your loyalty belongs to your company. Always do what is best for your company.

2) Always share your knowledge freely.

3) Never strategize in order to "secure your job".

4) Always pick the project or job where you will learn the most (grow the most as a person).

I would guess 90% of people I have met ignore this and start strategizing at some point. They seem to always lose in the long run.

"The company treated me wrong, so why should I work as efficient as I can?"

"I can't teach him EVERYTHING or my job won't be as important/secure any more."

"I will pick this project, because I have done something similar already, so it will be easy work."

When sticking to 1-4, relevant people will notice eventually and your trajectory will go up.

When ignoring points 1-4, relevant people will lose respect for you. And even worse, you will lose respect for yourself.

This is just my opinion or my experience so far.

scarface74 1 day ago 0 replies      
I have mixed feelings about the post and what I would do.

1. In my current situation, having a wife that has a job with pretty good family health insurance, living in an area (not SV) that has a great job market, and with in-demand skills, my first thought is that I would look for another job and explain that I automated myself out of a job. That would be like saying I was laid off from Netflix because they didn't need me anymore after I led the transition from hosting servers on prem to AWS.

2. But he isn't in that position. He needs to work from home to stay with his kid, and according to him:

Most likely they can walk out of their silicon valley office and shout "I want a job" and get 3 offers to start the next day. Unfortunately, there are places in the country that just aren't like that. I'm not trying to have a go, I'm just saying that the situation absolutely does matter.

If I were in that position would I voluntarily tell them I've automated the whole thing? I'm not sure. Hopefully I would not intentionally add bugs. I would definitely be using the time to study and keep my skills up to date.

nemo44x 1 day ago 1 reply      
He should approach management and offer to automate this process for a lump sum of x-years pay. If they plan on using this method for 5 years then offer 4 years of salary for this tool.

Everyone wins. They save a year's salary and don't have to deal with data entry errors. He wins because he is paid and can continue to earn more.

This is an example of automation and capitalism revealing their best features.
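The arithmetic behind that split can be sketched in a few lines; the salary figure and time horizon below are made-up numbers for illustration only:

```python
# Illustrative numbers for the lump-sum buyout described above:
# the company expects to run the process for 5 years, and the
# employee offers the tool for 4 years' worth of salary.
annual_salary = 60_000   # assumed salary
years_of_use = 5         # assumed planning horizon

lump_sum = (years_of_use - 1) * annual_salary       # 4 years' pay for the tool
company_savings = years_of_use * annual_salary - lump_sum

print(lump_sum)          # 240000
print(company_savings)   # 60000 -- the company keeps one year's salary
```

Under these assumptions, both sides come out ahead of the status quo: the employee is paid four years up front, and the company spends one year's salary less than it otherwise would.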

mcrad 1 day ago 0 replies      
I have worked at enough places where "appearing busy" is rewarded far more than being efficient and truly productive. This seems to be where we're headed as a middle management society and it sucks. Ethical behavior would mean doing whatever you can to reduce such perverse incentives (venturing a guess that means keep your mouth shut in this case).
buremba 1 day ago 1 reply      
If he's getting paid for the result, then it's fine, but most probably they pay him based on the hours he works, and he fakes it in order to get a full wage. That's not ethical, as stealing is unacceptable even if your children are starving.

I would probably tell my employer that I wrote the software during weekends and that it was finished last week (since I would probably get fired if I told them the truth, and I can live with the unethical side of this). The software also avoids the need for human verification, which means they can just get rid of the verification step. I would then start a company with the software I built and offer them a monthly subscription fee to get their work done. He would still get paid, and could also sell the software to similar companies.

If he doesn't want to deal with starting a company and would rather spend time with his children, then he can find a business partner to handle everything other than the product.

nehushtan 1 day ago 0 replies      
Like others have said, there's unethical deception involved in inserting arbitrary errors - especially to make it "look like its been generated by a human".

But my feeling is that in addition to paying the OP to "do a job", the company is also paying him/her (him from now on) to "be on call". Yes, they want X results, but they also are paying a salary so that they can tap him whenever they need to. This aspect of the job is referred to when he says there "might be amendments to the spec and corresponding through email".

To some companies (especially those with very little other in-house expertise) having "the computer guy" on call to handle all of that mysterious stuff is worth a great deal of money. The company could consider it their insurance against catastrophe.

Nevertheless I would say the OP should come clean at the next performance review.

the_watcher 1 day ago 0 replies      
On feeling uncomfortable with it ethically but also being concerned about finding another job: couldn't the OP just dedicate some of the free time to finding another job that allows him to keep the remote lifestyle he wanted when he took this one? Then he could let his current company know that he's created software to do the job he was hired for and has been testing it over the past X months to iron out the bugs. At that point he could give them the option of keeping the software and not him (leaving nobody employed who can fix any bugs that might arise in the future), or letting him continue in the current arrangement (perhaps negotiating lower compensation in exchange for him running everything in an hour or two a week, supplementing that income with the new job).
whiddershins 1 day ago 0 replies      
I think the discussion ignores the labor law regarding salaried employees. At least in New York, a salaried employee must be paid in full for any day he works any part of; at least that's my understanding. In general, the law tends towards the position that salaried employees who are exempt from overtime are also exempt from being docked pay for missing an hour of work in any given week. (Although I believe the employer can take it out of your vacation time, etc.)

So, at least from a legal standpoint (IANAL) my understanding is as long as the poster takes even five minutes a day to verify his work, he is performing his duties as a salaried employee. It is up to the employer to determine if he is worth his salary or not.

mathattack 1 day ago 1 reply      
From a legal standpoint the company owns the automation. You need to tell them. They pay for your time and the IP you create.

An enlightened company would entertain your offer to deliver the same value as a fee for service at a discount to them. (You would incorporate)

matt_s 1 day ago 0 replies      
The poster there is referring to the current state of things being automated. He is the expert on that particular system. If there are changes upstream, his automation will fail, and if he isn't there, then what?

When upper management changes and someone would like to change the system or business process it supports they will need him.

bayesian_horse 1 day ago 0 replies      
My recommendation would be to ask a lawyer if the employer has the rights to the software he wrote to automate his job. Depending on the answer, either offer the software or offer to write the software as a negotiated one-time purchase.

To find a basis for the negotiating, consider the salary for the foreseeable future and present-value that income.

The result would most likely be a happy employer, and an ex-employee with a lot of money in the bank who is now free to find any other job or move wherever he/she wants. Maybe even with the same employer.
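That present-valuing step is a one-liner; as a sketch (the salary, horizon, and discount rate below are made-up numbers, and payments are assumed at year-end):

```python
def present_value(annual_salary, years, discount_rate):
    """Discount `years` of equal annual payments back to today:
    PV = sum over t of salary / (1 + r)^t, for t = 1..years."""
    return sum(annual_salary / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# e.g. three more years of a (hypothetical) $90k salary at a 5% discount rate:
print(round(present_value(90_000, 3, 0.05)))  # -> 245092
```

The discount rate is the contentious input: the higher the chance the arrangement evaporates, the higher the rate and the lower a fair buyout price.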

Hasz 1 day ago 0 replies      
They want x amount of data processed, and are willing to give $y to do it. If the OP has found a way to offer x at a much lower price than estimated, good for him.

People make shitty deals all the time; he is under no obligation to tell them how to fix it. The relationship is contractual and nothing more.

hateful 1 day ago 0 replies      
This describes my first job. We ended up automating everything and I ended up getting hired as a programmer.
rayiner 1 day ago 2 replies      
I'm surprised nobody suggested going back to doing it manually. If he can live with the moral compromise of what he's already done, he can eliminate any concern on a going-forward basis simply by deleting his script and doing things the old-fashioned way.
thirdsun 1 day ago 0 replies      
I understand that the company told him not to mess with the system, but why not show them that he found a way to automate the process, without admitting that he's been using it for a long time?

Maybe I'm too optimistic or naive but after successfully showing them that it works and saves time the conversation could move on to optimizing other tasks and problems the company surely encounters. Instead of letting the guy go I could easily see how they find additional value in him in other areas.

rcthompson 1 day ago 0 replies      
I think the practical thing to do is for them to assume that this won't last forever and start using some of that spare time to improve their employment prospects, i.e. looking for another job that isn't in danger of evaporating if anyone looks at it.
nandemo 1 day ago 1 reply      
If you believe in the Gervais Principle,


then OP is a "loser", and most answers divide into "losers" ("coming clean will be worse, so just keep doing the bare minimum") or "clueless" ("it is unethical to mislead your corporate masters"). I'd like to see what's the "sociopath" answer.

slim 1 day ago 0 replies      
If he had a boss, reselling his work, and making those margins instead of him, he would not have this ethical dilemma. Somehow the ethical aspect of work disappears when there is even the slightest layer of sales above it
toast42 1 day ago 0 replies      
As a follow-up to this question, imagine a job where a significant portion of time is spent waiting on a computer (rendering animations, code compiling, etc).

Two contractors are hired, one with a modern laptop and the other with a 10 year old machine. The older machine takes at least twice as long to process the work.

Is it A) ethical to bill for time spent waiting for the machine to process and B) ethical to use the older machine? Assume the contractor using the older machine is using the best equipment currently available to them.

35bge57dtjku 1 day ago 0 replies      
Ethical or not, I'd be more concerned about what I'd do if the current job ended, regardless of why it ended. What do you tell the company you're interviewing with about what you did at your last job? And I don't mean discussing this with them, I mean what do you say about the projects you worked on over the past n years when there's only this one automation?
anovikov 1 day ago 0 replies      
Of course don't tell them. It is always stupid to leave money on the table. You win nothing by telling in any case. They will fire you; they will own your app (after all, you are a programmer and you coded it while on the job, so it belongs to them anyway); and others in that company will hate you because you hacked through shit they had to do manually for years. And after all that, they will even think that you scammed them.
bhgraham 1 day ago 0 replies      
I counter with another question. Is it ethical for a company to pay you the same to do more work? If you tell them you have automated your job, I guarantee that the reaction will be to give you more work. You will not get more free time or more pay.
kyberias 1 day ago 1 reply      
Wow. How can anyone be confused about this? This is clear cut. The stackexchange answers are correct and this HN thread is filled with really unethical, almost childish advice. What the guy is doing is basically fraud. The employer expects him to do efficient work. If he can automate it, that automation is owned by his employer.
fisknils 1 day ago 0 replies      
No. You're performing the job you are paid to do. You could hint that you could handle more work if you think it would benefit you, but how you perform your job, as long as the end result is the same, is not something you want to "bore your busy employer" with.

Especially not if it means saying "You could do this without me now"

jarym 1 day ago 1 reply      
Seriously... Op could quit his job, then go tell his employer he'll maintain the system for a fixed monthly subscription.

They'll have less headcount and op will be free to pursue other activities.

richpimp 23 hours ago 0 replies      
Does it really matter if it's ethical or not? I mean, we're not talking about the ethics of, say, killing a tyrannical despot or allowing a terminal sufferer to commit assisted suicide. If it were me, I'd keep on collecting that paycheck while picking up a second job and double my pay. Again, is it ethical? No. But who cares? This person's mom is right, he has a free lottery ticket. Keep cashing it in. You can keep your ethics while I laugh my way to the bank. Just don't get caught :)
venture_lol 1 day ago 0 replies      
If you are paid according to time and materials, you could be sailing in bad legal waters. Ask an attorney about fraudulent billing.

If you are paid like an FTE, then as long as your employer is satisfied with your level of productivity, it really does not matter how long it took you to produce results.

Nevertheless, it's shady to insert bugs into your products. "My work is my pride" is what matters in the end.

hysan 1 day ago 0 replies      
I know this is an ethical question, but I wonder, would it be possible to have him license the script to the company for some annual fee and then offer the company a support contract as well in case new quirks are found that need to be updated? Combined license + support contract cost == his current salary. Or does the automated script he created already belong to the company?
suls 1 day ago 0 replies      
There was a very similar story a while ago on HN: "Kid Automates Work, Is Fired, Hired Back, Automates Business" https://news.ycombinator.com/item?id=4167186

History repeating itself?

aey 1 day ago 0 replies      
Quit! Start your own company that sells your job as a service.

If you do it right your current boss can be your first customer.

danthemanvsqz 1 day ago 0 replies      
I think the only unethical thing is holding back on the results. It's great to automate tasks and the company doesn't need to know about it unless it is explicitly stated. But your obligation is to deliver high quality results as fast as you can.
nsxwolf 1 day ago 0 replies      
I can't help but think the ironic day will come where someone in the organization will get the idea to automate his job and bring someone in to do it, and that's how he'll finally be let go.
omginternets 1 day ago 0 replies      
Whether you answer "yes" or "no" to this question basically amounts to whether you're an entrepreneur or an employee at heart.

I'm only half kidding.

punnerud 1 day ago 0 replies      
This is why I like the new innovation-project type in the EU, where you are allowed to work for a company and sell a project to them at the same time. The question here is how you could sell them the system, if you are even allowed to in the US.
erkkie 1 day ago 0 replies      
Interesting question here to me is how to align incentives in a manner that works out best for both, the employer and the employee. Automate your job and then move on to higher level problems, rinse and repeat. Profit share?
m83m 14 hours ago 0 replies      
The best solution is to become a contractor with this employer, and charge a flat rate per result or per week/month.
logfromblammo 1 day ago 0 replies      
If you can trust the company to act in an ethical manner rather than a purely profit-seeking manner, there should be no problem in telling them you have automated your own job out of existence.

They pat you on the back, license the software from you for 0.5x your former salary every year, move the folks that formerly did that same work to other projects, and put you on retainer to update the program if it ever needs it. Then they offer you different work, to see if you can work more magic.

That said, I would only trust one of the companies that I have ever worked with to do that. The rest would screw me over good and hard, giving one excuse or another.

By the Hillel principle ("If I am not for myself, who will be for me? But if I am only for myself, who am I? If not now, when?") you have to consider the impact on yourself as well as upon others. Will the company fire me? Will it keep me and fire my co-workers, since I can do all of their work for a week in a single day? Will it pay me more to do so? Do I have a duty to act in the company's best interest if that conflicts with my own? What if it is best for myself and the company, but ruinous for innocent bystanders?

Clearly, if this is a typical US company, the ethical course of action is to not inform the employer. This is an unfortunate loss for the economy as a whole, but it is the only appropriate response to the modal behavior of business management. Maybe also file a patent on the method of automation, if able.

methodin 1 day ago 0 replies      
Willfully putting bugs in code is ridiculous; that alone would be grounds for firing. The OP's concern about ethics has made his actions unethical where they otherwise would not have been.
TheBaku 1 day ago 0 replies      
I wonder, if OP told the company about his script and the company demands the script is he forced to give it to them?
aj7 1 day ago 0 replies      
Congratulations. You are supporting yourself on a monopoly rent. Don't be a fool and give it away for nothing. You've already said too much.
Vektorweg 1 day ago 0 replies      
Having an employer-employee relationship is already considered unethical by socialists.
myrloc 1 day ago 0 replies      
My question is - will he ever tell the employer about the program? Even after the point of employment.
alkoumpa 1 day ago 0 replies      
similarly, on a larger scale, one could ask whether deep learning is unethical for automating millions of jobs (if not yet, certainly in the future).
adamzerner 1 day ago 0 replies      
There's no universally accepted "right" answer to questions of ethics. See https://en.wikipedia.org/wiki/Normative_ethics#Normative_eth... for some approaches.

I'll provide a few perspectives.

Act consequentialist ("hardcore"): Is the world as a whole better or worse off after you take that action? Probably better off. By taking that action, there'll be less money in your company's pockets. That money may trickle down a bit to Average Joes, but will probably go mainly to rich people who don't need it. On the other hand, you'll have more free time, you'll be happier, and you'll get to spend more time with your son.

Rule consequentialist: Evaluating the costs and benefits of this particular action is error prone, so you're better off just following a good rule of thumb. In this case, I think a good rule of thumb is to abide by your contract. Your contract as a full-time salaried employee is, basically, to give them your time for 40 hours a week and work reasonably hard. If your contract were some sort of fixed-price freelance gig, things would be different, but by signing the contract you did, you gave them your word that you would work reasonably hard for 40 hours a week, and keeping your word is a good rule of thumb.

Rule consequentialist: Evaluating the costs and benefits of this particular action is error prone, so you're better off just following a good rule of thumb. In this case, I think a good rule of thumb is to be honest, and tell your boss.

Deontologist: You have a _duty_ to follow your contract. You should do it _because it's your duty_, not because you think it'll lead to good consequences.

Deontologist: You have a _duty_ to be honest.

Deontologist: You have a _duty_ to be the best possible father you can be, no matter what it takes.

Virtue ethicist: You should follow your contract, because doing so is sticking to your word, and sticking to your word is virtuous. You shouldn't be sticking to your word because you think following that rule-of-thumb will lead to good consequences, you should be doing it simply because it's virtuous.

Virtue ethicist: You should do what is best for your son, because being a good father is virtuous.

Personally, I believe in consequentialism, and I believe that you can use your judgement to decide whether or not to use act or rule consequentialism, based on whether you think you have a decent grasp of the trade offs. If you don't have a good grasp of the trade offs, you can expect a rule-of-thumb to do a better job than your attempted analysis, and should go with the rule-of-thumb. Otherwise, go with the results of your analysis.

In this situation, it seems to me that the trade offs are relatively clear, and that you could go ahead and keep it to yourself. But I wouldn't blame someone for taking the position that the trade offs aren't actually too clear, and it'd be better to fall back on a "be honest" rule-of-thumb.

Note: I expect that if you told them, they would take the program and either a) use it and fire you, or b) maybe keep you around as a contractor or something to add to the program. You wrote the program during work hours, on a work computer, presumably. So legally, it is their intellectual property, assuming you don't have some atypical clause in your contract.

ryanmarsh 1 day ago 0 replies      
In business this is called innovation.

If a business found such an internal optimization, would it tell its customers what a killing it's making, or keep the profit and grow the business?

Telling the boss is peasant thinking.

ryanmarsh 1 day ago 0 replies      
Companies treat us so unethically why are we so gracious to them?
s73ver 1 day ago 0 replies      
Considering that they're just as likely to fire you as they are to promote you, I would say it's perfectly ethical to not tell them.
Para2016 1 day ago 0 replies      
I wouldn't tell the employer. I'm guessing they don't care about you and they will take your idea, use it, maybe get rid of you without any compensation for creating the automation.

The job you've been hired for is being completed by a tool you made, and you're getting paid. Maybe look for something more appropriate to your skill set like another post suggested.

Oh and if you're feeling guilty you can read this story about Alcatel stealing IP and forcing a guy to work like a slave.


flukus 1 day ago 0 replies      
It feels as though everyone is focusing too much on the specifics and not considering that there might be a bigger picture. If this person has a spouse to support, kids to feed and clothe, and a mortgage to pay, then they also have an ethical responsibility not to risk their income by coming clean. Even if it's just themselves, there is an ethical responsibility to provide for themselves.

I think it's probably unethical behavior, but probably for entirely ethical reasons.

mythrwy 1 day ago 0 replies      
I don't believe this is an "ethics" problem. No reason for existential angst.

This is a practical problem. What do you want from the company long term? How do you want to spend your days?

Consider the longer term. What happens if they find out you automated something and didn't tell but rather milked it? Do you even care about what their reaction might be? Is this company at all important in your future? (there aren't "right" and "wrong" answers to this, depends on what your goals are).

How do you see this ending? How can you make maximum advantage out of the situation while preserving what you want out of the company (including possibly continued employment).

As far as I'm aware Moses didn't say anything about these types of situations so you are on your own. But don't be a short term thinker.

thinkfurther 1 day ago 0 replies      
note: before posting I realized logfromblammo said what I'm trying to say, and more, much shorter and better: https://news.ycombinator.com/item?id=14657981 but now that I already rambled so much I don't want to just throw it away either, so here goes nothing.

> Is this the kind of example you want to set for your son?

Yes. I can nearly touch the very smart and decent person behind that post (which I didn't fully read because you bolded this and I had to get my opinion out before reading on :P)

Use a lot of your time on that son, and some of it on helping people here and there who don't have much time. Spend little money and lots of time! You can answer your son's questions, you can play with him.. don't sacrifice that luxury light-heartedly. Don't spend that penny without turning it over lots, it's the first of that nature you got, and many people don't even know a person who had one.

Of course, as others said, also learn interesting things and keep your eyes peeled for a job that would have meaning to you you can be 100% straight about to everyone involved. But I assume you're already doing that anyway.

This stroke of luck might not last forever, but it is a stroke of luck IMHO, from the sound and content of your post I'd say a nice thing happened to a nice person who put in the work to deserve it. Nothing unethical I can see about it. If they want it automated, they can hire a programmer. Wanting to have it automated by someone for data-entry wages, now that's unethical. So if you want ethics, calculate a generously low programmer salary for 6 months, then coast along some more until they paid you this much.

One thing I'm sure, suffering 40 hours a week when there is no need is kind of the worst example you could set for your son. IMO, of course. His father at least for a moment is free from bondage, but also free from delusions that often come with "aristocracy" (for lack of a better word, I just mean most people who "live the easy life" pay with it dearly in ways they don't even register). That's as rare as it is beautiful. Take the advice of anyone who never tasted this with a grain of salt. Especially if you use free time to seek out things you can do or create that are interesting to you -- I don't believe in relaxation or entertainment that much, I love being focused and busy, but I believe in autonomy and voluntary cooperation.

Everybody should... well, okay, 2 hours a month wouldn't be enough by a long shot, but I do believe that low work levels and above-starvation living standards for all people on Earth could and should be compatible with a dignified, strong personality. But we're really programmed to not even want that, to not even recognize that as the minimum responsible adults should settle for, but rather to belittle it as utopian. Yeah, it's a hard problem, but it doesn't get easier by working on unrelated gimmicks instead.

As you said yourself, the company already gets the end result it wanted out of you for that money. Now they get the bonus of you improving yourself and the world, and spending more time with your son than you otherwise could. At least on a human level, anyone who doesn't see this as an added bonus to be happy about is petty. This makes the world much better than saving the company a job would, which often is just pissing down the drain. You didn't take this job with the intent of automating it, and you probably started trying without even knowing whether it would work, because you like coding. And then you knew that they wouldn't just say "good on you, enjoy the time with your son". I know I'm trying a bit hard here, but if you squint you might say you have to "lie" to get them to "do the right thing".

> You cannot strengthen one by weakening another; and you cannot add to the stature of a dwarf by cutting off the leg of a giant.

-- Benjamin Franklin Fairless

This is true. And yet, if you would let them, they would do it. To be fair, I know none of the people involved, but for a general "they" this is too often true. And nothing would be gained, only something would be lost, and you would have lost the most.

I say you got lucky, it's yours. Use a lot of it selflessly, but use it! Maybe ask a lawyer for advice, don't be reckless of course. But if your only danger to this is your conscience being infected with the general pathology of society, rectify that. Fuck survivor guilt, you know? Good for everyone who gets as far away from the prison system (in the sense of System of a Down) as they can. Don't leave us in the ditch, but never get dragged back in either.

ceejayoz 1 day ago 5 replies      
They haven't just automated the job - they've deliberately inserted errors to make it look like a human made them. That's a big step over the line into fraudulent behavior.
Markoff 1 day ago 0 replies      
it's simple - how long do you plan to stay with this company? 3 years? so ask for 3 years' salary for your program, or even more for additional support and updates

if they don't seem interested in this, just keep doing what you are doing

it's the same as comparing the performance of employees: some of the smarter among us learn workarounds to make our work more efficient. is it mandatory for us to share our findings? what would be my motivation to share unless I got a significant bonus or a share of the company profit gained through the higher productivity?

now if you just started and you are in your 20s, I can see how you might still have a naive, idealistic attitude and let yourself be abused, helping the company fire people including you; if you are older you are less prone to this bullshit

vacri 1 day ago 0 replies      
If the author is an employee, it's pretty clearly unethical to withhold information from the company. The real question is not whether or not it's unethical, but whether the author is okay with behaving unethically.
crawfordcomeaux 1 day ago 0 replies      
I'd argue everything they're doing could be portrayed as ethical in some context.

If they aren't actively looking to replace the job they feel the need to fraudulently accomplish, I'd argue that's the unethical component. I don't think they mentioned anything about looking for more work.

It's one thing to be in a situation where the only options you can perceive as valid are fraudulent ones. It's another thing to choose to stay in it instead of choosing to extract yourself.

_RPM 1 day ago 0 replies      
Workplace.stackexchange.com makes me cringe. It seems every post is written by socially challenged people with absolutely no social awareness or confidence. I had to stop subscribing to it.
Another Ransomware Outbreak Is Going Global forbes.com
500 points by smn1234  3 days ago   408 comments top 43
Animats 2 days ago 8 replies      
Maersk is down. Their main site says:

 Maersk IT systems are down We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyber attack. We continue to assess the situation. The safety of our employees, our operations and customer's business is our top priority. We will update when we have more information.[1]
Maersk is the largest shipping company in the world. 600 ships, with ship space for 3.8 million TEU of containers. (The usual 40-foot container counts as two TEUs.) If this outage lasts more than a few hours, port operations worldwide will be disrupted.

[1] http://www.maersk.com/en

willstrafach 2 days ago 7 replies      
FYI to Sysadmins: Paying the ransom at this point will be a waste of money, as the contact e-mail address has been blocked.

https://posteo.de/blog/info-zur-ransomware-petrwrappetya-bet... (German)

https://posteo.de/en/blog/info-on-the-petrwrappetya-ransomwa... (English)

jannes 2 days ago 7 replies      
This is even more proof of how powerful a 0-day in the wrong hands can be.

All of the affected companies should be considered compromised by the NSA.

Actually, every single Windows PC with an internet connection that has been used before March 14 should be considered irrevocably compromised. Ransomware is much more visible than spyware. Think about all the spyware-infected PCs/networks that nobody knows about.

elcapitan 2 days ago 3 replies      
secfirstmd 2 days ago 1 reply      
(Sorry for the repost but I feel the pain of sysadmins so it might be useful to some people as everything melts down around them this evening)...

Hey, FWIW we had to do some response for ransomware cases recently.

There was a lack of decent stuff out there for how IT teams should deal with it. So we contributed to putting together this quick checklist:


Would be great if more people wanted to add to it.

bkor 2 days ago 2 replies      
The Netherlands and various other countries have created laws where either their version of the NSA and/or police can hoard 0days to be used for hacking.

This massive outbreak is so widespread that at this stage it appears it was either a very recent 0-day or something that was only recently fixed by a patch.

Instead of having loads of countries hoarding security problems I highly encourage a focus on security instead. Seems much better for the economy overall.

110011 2 days ago 4 replies      
Can someone provide a simple (but not overly so) explanation of how the current generation of ransomware operates, i.e., A) spreads and B) locks up the computer? Does it always require human intervention for A? Thank you.
maddyboo 2 days ago 4 replies      
Does anyone know if any tools exist on Linux which can be used for early detection of ransomware?

Something that monitors file access, disk activity, etc. for suspicious behavior and can trigger some action or alert?

I think I remember some discussion about using a 'canary file' - some innocent looking file with known contents which should never be modified. If a modification is detected, you know something fishy is going on.
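A minimal version of that canary-file idea fits in a few lines of Python (stdlib only; the path and polling cadence below are arbitrary choices, and a real deployment would want inotify/auditd events rather than polling):

```python
import hashlib
import os

def file_digest(path):
    """SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def canary_tripped(path, baseline_digest):
    """True if the canary file was modified, replaced, or deleted."""
    if not os.path.exists(path):
        return True  # deletion is just as suspicious as modification
    return file_digest(path) != baseline_digest

# Typical use: record the baseline once, then poll from cron/systemd and
# alert (or cut network access) as soon as the canary trips, e.g.:
# baseline = file_digest("/home/user/Documents/000-do-not-open.docx")
```

Naming the canary so it sorts first in directory listings matters, since most ransomware encrypts files in enumeration order.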

nlte 3 days ago 2 replies      
This isn't yet the cyberattack "the world isn't ready for" (https://www.nytimes.com/2017/06/22/technology/ransomware-att...), is it?
mbaha 2 days ago 5 replies      
A friend sent me the bitcoin address; they've already collected $2,600.

[EDIT] Now $3,230.

Source: https://blockchain.info/address/1Mz7153HMuxXTuR2R1t78mGSdzaA...

vldx 2 days ago 1 reply      
Interestingly, WPP is mandating that all its employees shut down their computers irrespective of the OS.

> As a precaution, WPP is mandating that everyone immediately shut down all computers, both Macs and PCs. This applies to you whether you are in the office or elsewhere. Working on an office computer remotely is not an option. Please leave your computers turned off until you hear from us again.

> Many thanks for your co-operation and patience.

> Best regards,

dz0ny 2 days ago 0 replies      
Public analysis is tracked here https://otx.alienvault.com/pulse/59525e7a95270e240c055ead/

Seems that payload servers are in Germany, France, and Malaysia.


mihaifm 3 days ago 3 replies      
> with WannaCry it was alleged a nation state was likely responsible for spreading the malware: North Korea

Is there any evidence for this? Looks like another fake rumor.

the_cat_kittles 2 days ago 6 replies      
I said this before and it was met with mostly hostility, but I'm still wondering... Bitcoin has enabled ransomware, so it's a boon to crooks. What has it done for non-crooks? I don't mean conceptually (no Fed! decentralized! etc.), I mean since it came into being, what has it done for you personally? For me: I bought a VPN subscription, anonymously. Probably not as easy to do without BTC. But I would personally trade that for not having ransomware attacks. Thoughts?
voidmain0001 2 days ago 1 reply      
Kaspersky wrote about Petya 16 months ago. https://blog.kaspersky.com/petya-ransomware/11715/ Has the delivery changed causing it to resurface again?
memracom 2 days ago 0 replies      
Note that having a good multi-generational backup system in place for all machines, servers and laptops, would render this kind of ransomware harmless.

But the state of IT has deteriorated so badly these days because management doesn't care any more. After all why care when you can just take your severance pay and get an increase in salary and more responsibility at another company. Rinse and repeat.

It used to be that the primary job of system admins was to keep the data safe from loss. That was more important than keeping the systems running. How did we lose this?
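The "multi-generational" part is the crux: a single backup that gets overwritten nightly is itself hostage to the ransomware. A toy sketch of rotating snapshots (stdlib only; the directory layout and `keep` count are arbitrary, and in practice the destination should be storage the source machine cannot write to, e.g. a pull-based backup server or a read-only mount):

```python
import shutil
import time
from pathlib import Path

def snapshot(src, backup_root, keep=7):
    """Copy `src` into a new timestamped directory under `backup_root`,
    then prune all but the newest `keep` generations."""
    root = Path(backup_root)
    root.mkdir(parents=True, exist_ok=True)
    dest = root / str(time.time_ns())  # unique per call, sorts lexically
    shutil.copytree(src, dest)
    generations = sorted(p for p in root.iterdir() if p.is_dir())
    for old in generations[:-keep]:
        shutil.rmtree(old)  # oldest generations fall off the end
    return dest
```

With `keep=7` and a daily cron job you always hold a week of history, so an encryption event discovered on day N can be rolled back to day N-1.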

tudorconstantin 2 days ago 4 replies      
Maybe this is the year of Linux on desktop.
HIBC2017 2 days ago 0 replies      
If you're infected, don't pay the ransom. The email address that's used has been blocked by the email provider.


hackrack 2 days ago 1 reply      
Idea: What if the purpose of these WannaCry style ransomware attacks isn't to get people to pay in Bitcoin, but to drive up the price of Bitcoin?
nuclx 2 days ago 3 replies      
As someone affected by the ransomware: did anyone else notice empty console windows popping up from time to time in the days before the ransomware triggered the encryption?
kuon 2 days ago 0 replies      
Those attacks are still "gentle": if you have a read-only backup (and you should), you can resolve them with near-zero data loss.

What I fear are cancer-like viruses that don't wipe or encrypt data at time T, but introduce subtle errors over a longer period. You would be contacted by hackers saying your last 6 months of data contain errors. That's scary.

tonyplee 2 days ago 1 reply      
Wonder if they manage to disable the UK's Trident Nuclear Submarine this time.

"Windows for Warship": https://en.wikipedia.org/wiki/Submarine_Command_System

"Want to Nuke someone, please send Bitcoin to unlock the systems."


mighty_warrior 2 days ago 1 reply      
Your own fault if you didn't patch out EternalBlue. No sympathy for hacked orgs.
jz10 2 days ago 2 replies      
My friend's work laptop is a victim of this same attack... all the way here in the Philippines.

There was a company wide email blast to disconnect all workstations from the internet at once.

Fascinating development

r721 2 days ago 0 replies      
>Cyberattack hits entire Heritage Valley Health System, shuts down computers

>A cyberattack is affecting the Beaver and Sewickley hospitals and all other care facilities in the Heritage Valley Health System on Tuesday.


jl6 2 days ago 1 reply      
Could someone write a whitehat worm or virus to get into all those vulnerable Windows systems and close the door behind them by patching the hole?
andruby 2 days ago 0 replies      
I always replay the end-game of Uplink [0] in my head when I read news like this.

Great game with great music.

[0] https://www.introversion.co.uk/uplink/

runeks 2 days ago 0 replies      
> Ukraine's government, National Bank [..]

Now there's a new attack target: the central bank. Send 100 BTC to this address and I will decrypt the balances stored by your central bank, so you, again, know how much money you own.

kator 2 days ago 1 reply      
emersonrsantos 2 days ago 0 replies      
Does anyone know if this decrypt app still works?


gmisra 2 days ago 1 reply      
Is anyone aware of an entity that attempts to objectively quantify the economic impact of an event like this (ransoms paid, data lost, labor hours lost, new security costs, etc)?
zvrba 2 days ago 0 replies      
So... ransomware authors want payments in Bitcoin. The obvious counter-attack from governments would be to target and shut down all services exchanging bitcoins (or other digital money) for real money. Heck, they could hack them and delete all data, so the exchanges shut down on their own.
strictnein 2 days ago 2 replies      
On a related note, I don't understand the reason behind transactions like this:


Is there something special about using numerous senders like that?

r721 2 days ago 0 replies      
>We have confirmed U.S. cases of Petya ransomware outbreak


faragon 2 days ago 1 reply      
Why are ransomware authors not in jail yet? /cc FBI CIA BND MI5 FSB
paulpauper 3 days ago 3 replies      
Store important stuff on external hard drives.

Never download suspicious stuff, especially from emails.

kronos29296 2 days ago 0 replies      
I remember reading something about a guy warning about intrusions at his company during WannaCry to steal company data and install malware. Now we have this. This is giving me goosebumps.
agumonkey 2 days ago 1 reply      
How can one quickly check if their OS is vulnerable? I know MS pushed updates, but sometimes updates get stuck, fail to install, or are delayed by the user... so
athenot 2 days ago 2 replies      
Do these attacks affect anything else beyond Windows?
jdc0589 2 days ago 0 replies      
FYI, looks like this is still using EternalBlue.
butz 2 days ago 0 replies      
One of my clients got some strange emails around 12:00 GMT with links to probably infected websites. Is this related to ransomware?
agumonkey 2 days ago 3 replies      
Were these ships also oil tankers?
dagaci 2 days ago 2 replies      
I'm afraid that this attack demonstrates that the old PC architecture (side-loading any app, userspace, privilege escalation, low-level file-sharing functionality) just isn't fit for purpose.

If malware can exploit a 0-day, 100-day, or 1000-day security hole in a corporate network of 2000 machines, it's too easy for that malware to share itself across the network and send email attachments to AllUsers (every single company I've worked for still allows Everyone to send anything to Everyone).

Microsoft's next XP patch should be to remove SMB functionality or just outright disable it (and probably remove IE and other nonsense installed by default too).

And when Windows 7 expires, the final patch should be a severe lockdown too.

TDD did not live up to expectations microsoft.com
519 points by kiyanwang  1 day ago   397 comments top 94
nostrademons 23 hours ago 15 replies      
TDD failed for economic reasons, not engineering ones.

If you look at who were the early TDD proponents, virtually all of them were consultants who were called in to fix failing enterprise projects. When you're in this situation, the requirements are known. You have a single client, so you can largely do what the contract says you'll deliver and expect to get paid, and the previous failing team has already unearthed many of the "hidden" requirements that management didn't consider. So you've got a solid spec, which you can translate into tests, which you can use to write loosely-coupled, testable code.

This is not how most of the money is made in the software industry.

Software, as an industry, generally profits the most when it can identify an existing need that is currently solved without computers, and then make it 10x+ more efficient by applying computers. In this situation, the software doesn't need to be bug-free, it doesn't need to do everything, it just needs to work better than a human can. The requirements are usually ambiguous: you're sacrificing some portion of the capability of a human in exchange for making the important part orders of magnitude cheaper, and it's crucial to find out what the important part is and what you can sacrifice. And time-to-market is critical: you might get a million-times speedup over a human doing the job, but the next company that comes along will be lucky to get 50% on you, so they face much more of an adoption battle.

Under these conditions, TDD just slows you down. You don't even know what the requirements are, and a large portion of why you're building the product is to find out what they are. Slow down the initial MVP by a factor of 2 and somebody will beat you to it.

And so economically, the only companies to survive are those that have built a steaming hunk of shit, and that's why consultants like the inventors of TDD have a business model. They can make some money cleaning up the messes in certain business sectors where reliability is important, but most companies would rather keep their steaming piles of shit and hire developers to maintain them.

Interestingly, if you read Carlota Perez, she posits that the adoption of any new core technology is divided into two phases: the "installation" phase, where the technology spreads rapidly throughout society and replaces existing means of production, and the "deployment" phase, where the technology has already been adopted by everyone and the focus is on making it maximally useful for customers, with a war or financial crisis in-between. In the installation phase, Worse is Better [1] rules, time-to-market is crucial, financial capital dominates production capital, and successive waves of new businesses are overcome by startups. In the deployment phase, regulations are adopted, labor organizes, production capital reigns over financial capital, safety standards win over time-to-market, and few new businesses can enter the market. It's very likely that when software enters the deployment phase, we'll see a lot more interest in "forgotten" practices like security, TDD, provably-correct software, and basically anything that increases reliability & security at the expense of time to market.

[1] https://dreamsongs.com/RiseOfWorseIsBetter.html

alkonaut 1 minute ago 0 replies      
His argument is basically that with tight coupling, TDD is too hard and time consuming to pay off.

But part of the point of TDD is to ensure that all code is testable, and testable means loosely coupled.

So you can't start TDD'ing on a bad and tightly coupled legacy codebase. You can do it on a greenfield project however. Greenfield is very much the "lab environment" he talks about. You control everything.

With greenfield projects comes another reality though: you often have to explore and sketch a lot. TDD does not work well for writing a dozen sketch solutions to something and throwing out eleven.

And that to me is the main drawback of TDD: it works poorly for very young code bases and it works poorly for very old ones (that weren't loosely coupled to begin with). It's a very narrow window where you can start using TDD in a codebase and that's when the architecture is first set, but the codebase hasn't yet grown too coupled. Such a narrow window means it's not very popular, for good reason.

he0001 4 minutes ago 0 replies      
For me TDD is mainly three things.

Firstly it's about testing myself, so I've understood the task properly before writing any code at all. In this step I can also tell if my code is doing too much and therefore whether my method or function conforms to the single responsibility rule.

Secondly it's about maintainability and logical reasoning. In a codebase where I don't know what it's supposed to do, or have forgotten, I can always rely on the tests to skip the parts that aren't interesting, letting me move faster.

Thirdly it's about the ability to refactor and therefore evolving design. Even if this is a step in TDD, you will always need to refactor since requirements change over time. The solution you had is not the optimal solution anymore and therefore you must refactor anyway. Evolving design is a strength where you continuously strengthen your code while getting work done faster and faster, since you can offload the reasoning onto the tests, covering your back.

AFAIK TDD is actually the only way to produce code in a systematic way. When reading TDD-driven code I can make certain assumptions which I cannot make with randomly produced code (there's no system to the code). TDD code is developed with tests in mind and is always a lot easier to test when you need to, and you always need to. (I would argue that there is code which is not testable as-is, unless you refactor, and then you don't know whether the code is doing the same thing as it did before.)

If your tests get coupled with the code, I'd say it's because either your method/function is doing too much or it's a language problem: the language isn't giving you the necessary tools to ignore implementation details, which mocks usually are an indication of.

Since TDD is a systematic way of producing code (at least more systematic than not doing it), code which isn't produced with TDD won't play well with TDD-produced code, since it won't follow the same conventions, designs and possibilities.

TDD doesn't automatically make the code more bug-free, but I don't believe TDD causes more bugs just because you use TDD.

If programmers cannot learn or deal with TDD, you have a different problem on your hands.

atonse 23 hours ago 10 replies      
In the earlier days of the ruby community, I feel TDD was seen as gospel. And if you dared say that TDD wasn't the way (which I always felt), you'd feel like you were ostracized (update: just rewording to say, you'd worry that it would hurt you in a job search, not that people were mean to you). So I never spoke up. I feel like I was in the TDD-bad closet.

I absolutely think _tests_ are useful, but have never found any advantages to test-DRIVEN-development (test-first).

But part of that is probably my style of problem solving. I consider it similar to sketching and doodling with code until a solution that "feels right" emerges. TDD severely slows that down, in my experience, with little benefit.

What I've found works really well is independently writing tests afterwards to really test your assumptions.

agentultra 23 hours ago 6 replies      
Maybe the title should be: TDD did not live up to my expectations?

I too, like the author, have been practicing TDD for > 10 years. Test, implement, refactor, test... that's the cycle. If you follow that workflow I've never seen it do anything to a code base other than improve it. If you fail on the refactor step, as the author mentions, you're not getting the full benefit of TDD and may, in fact, be shooting yourself in the foot.

I've read studies that have demonstrated that whether you test first or last doesn't really have a huge impact on productivity.

However it does seem to have an impact on design. Testing first forces you to think about your desired outcomes and design your implementation towards them. If you think clearly about your problem, invariants, and APIs then you will guide yourself towards a decent system.

The only failing I've seen with TDD is that all too often we use it as a specification... and a test suite is woefully incomplete as a specification language. A sound type system, static analysis, or at the very least, property-based testing fill gaps here.

But for me, TDD, is just the state of the art. I've yet to see someone suggest a better process or practice that alleviates their concerns with TDD.

dcherman 23 hours ago 2 replies      
I've also never found TDD to be very beneficial except for the most trivial utility libraries.

Most of the time, I have an idea of where I want to go, but not necessarily exactly what my interface will look like. Writing tests beforehand never seems to work out since, nearly always, there will be some requirement or change that I decide to make that'd necessitate re-writing the test anyway, so why write it to begin with?

The extent of my tests beforehand these days (if I write any before the code) is generally of the form (in jasmine.js terms):

  it('should behave this particular way', function() {
    fail();
  });

Basically serving as a glorified checklist of thoughts I had beforehand, but that's no more beneficial to me than just whiteboarding it or a piece of paper.

That said, all of my projects eventually contain unit tests and if necessary integration tests, I just never try to write them beforehand.

mledu 1 hour ago 0 replies      
If anything I think this article makes a great case for TDD. If your developers aren't good at design and refactoring and that is showing up in your tests, that is an indication that your design needs to be refactored to be less coupled. TDD isn't a panacea, developers have to have some level of sense and see the signs of a less than optimal design. Pain in test creation is a great way of showing that as it simulates client code.

I also don't understand people thinking that you have to write the entire test suite up front. You build your test along with your code. You start simply and build up, this way if you don't have concrete specs your tests are helping you with the design by thinking about consumption as well as implementation.

sametmax 1 hour ago 1 reply      
Another reason for the failure of TDD is that you need to be a very good programmer to be productive with it. Indeed, it requires you to be able to think through your general API and architecture ahead of time.

Junior and 9-to-5 programmers suck at this. They are much better at tinkering until it forms a whole, then shape that into something that look decent and works well enough.

And we live in a world where they represent a good part of the work force.

You can't expect everyone to be a passionate dev with 10 years of experience, skilled in code ergonomics and architecture design, while being good at expressing him/herself. That's delusional. And harmful.

willvarfar 23 hours ago 6 replies      
No article about TDD, particularly one that shouts out to the respected Ron Jeffries http://ronjeffries.com/, is complete without mentioning the TDD Sudoku Fiasco :)

Ravi has a nice summary: http://ravimohan.blogspot.se/2007/04/learning-from-sudoku-so...

Peter Norvig's old-fashioned approach is excellent counterbalance: http://norvig.com/sudoku.html

austenallred 23 hours ago 5 replies      
The problem with TDD is that we flawed humans are writing the tests in the first place. If I suck at writing code there's no reason to believe I wouldn't suck at writing tests to check that code.

I use it on occasion as a good sanity check to make sure I didn't break anything too obvious, but this idea that TDD is a panacea where no bugs ever survive didn't ever make sense to me in the first place.

hdi 30 minutes ago 0 replies      
I like the general assumption that TDD failed.

Failed at what exactly? Who would think 1 methodology would give them all they need to be a great software engineer?

If TDD supposedly failed, can we hear the process that succeeded at what TDD failed? Please extend an olive branch and enlighten the rest of us.

Because I tell yea, I can't even count the number of "senior software engineers" I've encountered who deploy untested production code daily to systems that help you guys buy your coffee in the morning and manage your money and pensions. Oh yea, and they all seem to think TDD is bollocks too.

When that percentage decreases and engineers like that become a rare occurrence, then we can talk again. Peace.

cyberpanther 1 hour ago 0 replies      
Sometimes lowering your expectations is a good thing for everyone. Now we know the pros and cons and can use it appropriately. No one particular habit is going to solve all your problems.


chubot 22 hours ago 1 reply      
The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling.

Not to be smug, but I feel like this is a rookie mistake I learned 10 years ago immediately after starting TDD.

The slogan I use in my head is that testing calcifies interfaces. Once you have a test against an interface, it's hard to change it. If you find yourself changing tests and code AT THE SAME TIME, e.g. while refactoring, then your tests become less useful, and are just slowing you down.

Instead, you want to test against stable interfaces -- ones you did NOT create. That could be HTTP/WSGI/Rack for web services, or stdin/stdout/argv for command line tools.

Unit test frameworks and in particular mocking frameworks can lead you into this trap. I've never used a mocking library -- they are the worst.

There are pretty straightforward solutions to this problem. If I want to be fancy then I will say I write "bespoke test frameworks", but all this means is: write some simple Python or shell scripts to test your code from a coarse-grained level. Your tests can often be in a different language than your code.

The last two posts on my blog are about this:

"How I Use Tests": http://www.oilshell.org/blog/2017/06/22.html

"How I Plan to Use Tests: Transforming OSH": http://www.oilshell.org/blog/2017/06/24.html -- I want to change the LANGUAGE my code is written in, but preserve the tests, and use them as a guide.

And definitely these kinds of tests work better for data manipulation rather than heavily stateful code. But the point is that testing teaches you good design, and good design is to separate your data manipulation from your program state as much as possible. State is hard, and logic is easy (if you have isolated it and tested it.)

Summary: I use TDD, it absolutely works. But I use more coarse-grained tests against STABLE INTERFACES I didn't create.

norswap 23 hours ago 1 reply      
TDD never worked for me, I believe because of the nature of my work: research code, very explorative in nature. I do not know in advance how the interface will turn out, so it's hard to anticipate the interface in my tests (or it leads to a lot of wasted work).

Nowadays I mostly test with "redundant random generation testing": generate random but coherent input, run logic, then... Either I can reverse the logic, and I do that and verify that I get back the original input. Or I can't and then I simply write a second implementation (as simple as possible, usually extremely inefficient). This finds bugs that classical unit and integration testing would never find.

sevensor 22 hours ago 0 replies      
TDD has worked well for me exactly once: porting a library from Python to C. I had a very clear idea of what every function was supposed to do and I could write tests first. Due to the nature of the library I was able to write tests that generated lots of random inputs and checked the properties of the outputs. This was a great experience --- it was very easy to change the internals without fear of breaking something. Ordinarily writing C is a bit of a white-knuckle experience, but this made it quite pleasant.
tomelders 1 hour ago 0 replies      
Off topic: but who's in charge of typography at Microsoft? The typesetting of their blogs and documentation is horrific. And what little I see of Windows looks just as bad.
romanovcode 5 hours ago 1 reply      
Oh, finally the TDD fad is dying. Never got into it and always thought it is a complete waste of time.

I advocate to write tests only for critical algorithmic calculations and nothing else.

Integration tests matter 100x more (at least in webdev).

dkarl 21 hours ago 1 reply      
When you look at the TDD evangelists, all of them share something: they are all very good, probably even great, at design and refactoring. They see issues in existing code and they know how to transform the code so it doesn't have those issues, and specifically, they know how to separate concerns and reduce coupling.

I think one of the selling points of TDD, and something I hoped for from TDD, was that causation went the other way, and writing tests would result in code being refactored to separate concerns and reduce coupling. Sadly, I've seen that it is possible to write code that is highly testable but is still a confused mess. What's more, TDD as promoted encourages people to confuse the two, resulting in testability being used as a reliable indicator of good design, which produces poor results because it's much easier to make code testable than it is to make it well-designed.

I've also seen people mangle well-factored but untestable code in the process of writing tests, which can be a tragedy when dealing with a legacy codebase that was written with insufficient testing but is otherwise well-designed. A legacy codebase should always be approached with respect until you learn to inhabit the mental space of the people who wrote it (always an incomplete process yet very important), but TDD encourages people to treat untested code as crap and start randomly hacking it up as if it were impossible to make it worse.

This unfortunate (and lazy) habit of treating testability as identical with good design is not a mistake that good TDD practitioners would make, but I think they did make a mistake in the understanding of their process. My guess is that when those people invested effort into refactoring their code for testability, they were improving the design at the same time, as a side effect of the time and attention invested combined with their natural tendency to recognize and value good design. They misunderstood that process and gave too much credit to the pursuit of testability as naturally leading to better design.

I do think the idea of TDD is not entirely bankrupt, because the value of writing tests is more than just the value of having the tests afterwards, but I think its value is overblown, and people who believe in the magical effect of TDD end up having blind confidence in the quality of their code.

namuol 13 hours ago 1 reply      
The main benefit of TDD:

It strongly encourages you to think of your code as several input/output problems.

When you apply this model of thinking at scale it tends to lead to a much simpler (read: less-complex) codebase.

gkop 23 hours ago 2 replies      
There are at least several other advantages to TDD the article misses:

* Faster development feedback loop by minimizing manual interactions with the system

* The tests are an executable to-do list that guides development, helping you stay focused and reminding you what the next step is

* Provides a record of the experimentation taken to accomplish a goal, which is especially useful when multiple developers collaborate on work-in-progress

zorked 23 hours ago 1 reply      
And another generation of programmers learns that there is No Silver Bullet[1].

This will happen again and again.

[1] http://faculty.salisbury.edu/~xswang/Research/Papers/SERelat...

richardknop 4 hours ago 0 replies      
Would it still be TDD when talking about functional tests? So no mocking. Or does the strict TDD definition only include unit tests?

Because the general principle of writing a test case and then writing/editing code still applies with functional / integration tests.

And I have always preferred to use functional tests to test bigger components / packages based on their public interface than to write a unit test for every small function inside the package.

Unit tests seem to be much more useful in situations when you know exactly what your inputs and outputs should be, for example if you are writing a function to transform data from one type/object to another. This is where unit testing shines.

But a lot of development usually involves integrating / gluing together several higher-level components and passing data between them, and I much prefer functional tests there.

deweller 2 hours ago 1 reply      
I grant the premise that TDD has drawbacks. But are they really worse than the drawbacks of not writing tests?

Code with no test coverage will have more defects and will be more prone to regressions.

For many projects TDD is the best we've got until something comes along that replaces it.

dyarosla 23 hours ago 3 replies      
tl;dr: TDD is not, on its own, effective. If code is highly coupled, tests become highly coupled and a nightmare. The author advocates that learning to properly refactor and create low-coupled code should be a first priority ahead of following TDD blindly.

IMO, nothing really profound here.

exabrial 22 hours ago 0 replies      
TDD takes discipline and planning, and creates stability; I think in this day and age of "rewrite it using the latest framework" it just doesn't coincide with developers' insatiable thirst to use the latest bleeding-edge XYZ.
wyldfire 23 hours ago 1 reply      
Sorry for the aside but I find it humorous that the headline, which should read "TDD--, Refactoring++", instead shows "TDD—, Refactoring++".

This is emblematic of that frustrating AutoFormat behavior that replaces double dashes with an em-dash. Probably not a coincidence that this appears on MSDN -- perhaps it was drafted in Outlook or Office or some other tool with this same AutoFormat.

This feature is responsible for countless miscommunications between colleagues à la "I copied and pasted your command just as you had it in the email"...

thewoolleyman 20 hours ago 0 replies      
Having done TDD mostly full-time for well over a decade, I have to agree, and it hits home with some past experiences.

You can end up with fully tested, TDD'd code, that is not well-designed - i.e. unnecessarily coupled, and not cohesive. Cohesion and coupling are the basis of most everything in good software design - e.g. all the letters in SOLID boil down to those two things.

The premise of TDD is that it's supposed to make that too painful to do to a damaging extent. But, if you keep ignoring the pain, perhaps in the name of a "spike" solution, or because you just don't have the experience or background to know what good design is, you will end up with a tested mess of spaghetti.

And that's even harder to untangle and refactor than untested code, because you have to figure out which tests are useful, useless, or missing. That just slows you down as you work towards a better design.

In these situations, scrapping the whole module, including tests, and starting over is sometimes faster in the long run than trying to refactor incrementally with the safety net of existing tests (another of the main values of TDD).

-- Chad

(edit: typo)

flashdance 18 hours ago 0 replies      
I develop software for radio towers. This was a very confusing headline and article. I only figured out halfway through that I was thinking of the wrong TDD.

Test driven development is one thing. Time division duplexing is very different. I'll have you know that the latter did in fact live up to its expectations!

Showerthought: I wonder if our TDD codebase is TDD?

throw7 1 hour ago 0 replies      
too bad about the RE on the REOI. But TDD really failed because it didn't identify SLG parameters. I don't know if I'd call it a failure though, SLG parameters are usually hard to know before the start of a project and, even, throughout.
fanpuns 20 hours ago 3 replies      
I appreciate that many of you (including the article author) are coming at this question with a lot of experience. I, however, knew very little about coding a year ago and learned with TDD as part of how I build almost every project. Although I think it's always the case that I might "be doing it wrong", it's hard for me to imagine now writing code without first writing tests. Part of this is, admittedly, that I'm still uncertain if my solutions or code will even work and writing tests helps me to both organize what I want to do and also verify that I haven't made silly syntax mistakes.

Was it harder to learn this way? Absolutely (at least I think so, but my sample size is 1). I can't tell you the state of despondency I was sometimes in learning test suites and trying to figure out how to test certain things all while knowing that I could just write the stupid code and inspect the output to see if it was right.

Also, I love to refactor. How do you refactor if you don't have tests to catch you when you break something?

AngeloAnolin 7 hours ago 0 replies      

"developers are quick to pick up the workflow and can create working code and tests during classes/exercises/katas and then failing in the real world."

The anathema of TDD is that people equate having well-defined test cases with a solid product that delivers the solution the user wants. I have seen far too many software projects boasting >85% code coverage for tests, but still failing spectacularly.

TDD failed because it was assumed to be the magic wand that aligns the end product with what's on spec and the process it would cover, but the assumption that human behavior can merely be coded into test cases is far from reality.

dccoolgai 23 hours ago 0 replies      
Contract-driven development is the best model for building stable and reusable systems at scale. The flaw in TDD is that it tries to make the tests be the contract instead of supporting the contract.
peterburkimsher 11 hours ago 0 replies      
TDD supports the paradigm of Software Engineering as an Engineering field. Design, plan, build, test.

Chartered Engineers have qualifications for their skills to do this - whether it's building bridges, designing circuits, or making cars.

Most programming is not Engineering. It's scripting. Hacking together a quick solution to meet the user's immediate needs.

Huge businesses (including the company I work for) have some really weak points in their production flow. They're planning factory operations using some shoddy macros in Microsoft Excel thrown together by some businessperson with no programming experience. Management won't change it, because "it works".

Other fields of Engineering (civil, electronic, mechanical) have serious life-threatening consequences if they fail. Software rarely has that risk. (Insert comment about healthcare systems and WannaCry here).

For times when software carries serious risk, then TDD is still important! The rest of the time, it's a burden.

makecheck 10 hours ago 0 replies      
It's important to not assume that tests and code will be written by the same person.

When tests are being created early, it's actually a good excuse to have at least a couple of minds looking at the same problem, instead of bottling it up into one person who ends up quitting next month. It's an excuse to not just discuss the approach to use but have some code, where each person may realize that they hadn't really thought about the whole problem or maybe didn't understand it at all.

Other criticisms in this thread are still fair. It is certainly possible to waste a lot of time on tests, for instance, and to build something that is too restrictive. Ultimately though, if you're more than a one-person project, some form of "sketch it out first" is a good thing.

maruhan2 14 hours ago 0 replies      
"The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling. This means that if I change the behavior of how class <x> works, I often have to fix tests for other classes.Because I dont have low coupling, I need to use mocks or other tests doubles often. Tests are good to the extent that the tests use the code in precisely the same way the real system uses the code. As soon as I introduce mocks, I now have a test that only works as long as that mock faithfully matches the behavior of the real system. If I have lots of mocks and since I dont have low coupling, I need lots of mocks then Im going to have cases where the behavior does not match. This will either show up as a broken test, or a missed regression."

Simply comment them out temporarily?

"Design on the fly is a learned skill. If you dont have the refactoring skills to drive it, it is possible that the design you reach through TDD is going to be worse than if you spent 15 minutes doing up-front design."

I don't quite understand how TDD means you skip up-front design.

js8 10 hours ago 0 replies      
I think people should in fact test data, not code. Looking at it from a purely functional point of view, functions should either be proved to do what they should, or be asserted and QuickCheck-ed. But what really needs to be tested is that the input parameters (i.e. data) conform to the "hidden" assumptions we had when we wrote the functions. Because the reason we modify a program is usually that those assumptions have changed.
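The idea can be sketched in plain Python (all names here are illustrative, not from the comment): make the function's hidden assumptions about its input data explicit, and check a QuickCheck-style property over random inputs.

```python
import random

def average(xs):
    # Hidden assumptions: xs is non-empty and numeric.
    return sum(xs) / len(xs)

def check_input(xs):
    # Test the *data*: make the hidden assumptions explicit.
    assert len(xs) > 0, "average() assumes non-empty input"
    assert all(isinstance(x, (int, float)) for x in xs), "numeric input only"

# QuickCheck-style property: the average lies between min and max.
for _ in range(100):
    xs = [random.uniform(-1e6, 1e6) for _ in range(random.randint(1, 50))]
    check_input(xs)
    assert min(xs) - 1e-6 <= average(xs) <= max(xs) + 1e-6
```

When an assumption changes (say, empty input becomes valid), `check_input` is the one place that documents and enforces it.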
borplk 20 hours ago 1 reply      
Unfortunately TDD has become a band-aid for lack of constructs that should be a part of the language in the first place.

A language that allows you to express the spec could be so much more useful.

lucidguppy 17 hours ago 0 replies      
I'm not really convinced by this article or by the comments.

You can do exploratory code and TDD at the same time - you just have to write down what you expect the code to do first.

These criticisms of TDD are very weak because they don't spell out an alternative - every critic's vision of proper testing is different - and will respond "that's not what I meant".

thegigaraptor 18 hours ago 0 replies      
I hope nobody uses this article as ammo against TDD. The benefits are not felt immediately, but they pay off when the time comes for maintenance/updates. I'm working on my second port with a company. The first app had fantastic testing and I was confident in the work I delivered. This second app, however, was led by a developer who "needed to get things done", and I now have to wrap the v1 app in functional tests to validate that I'm delivering a solid port. If the company had enforced better practices sooner, they would have saved the time I'm spending on retesting the original app. This second iteration is test driven; hopefully the next dev has a better experience.

Also testing helps alleviate QA's workload by ensuring developers have not broken any tests and regressed functionality before we hand off to QA.

If you're hacking on an idea or learning, I can understand not testing, but if someone is paying to deliver code, deliver it with tests, period.

zubairq 2 hours ago 0 replies      
I wish I could upvote this article X10000000... I totally agree!
pif 23 hours ago 1 reply      
A.k.a: no methodology will turn a coding monkey into a great developer. What a surprise :-)
kevwil 21 hours ago 1 reply      
This logic is flawed, and I'm not surprised it's coming from Microsoft. If the expectation is that (repeatedly?) making blind guesses quickly and (optionally) cleaning up the mess later is better than expressing an understanding of the problem domain before writing 'real' code, then yes TDD will not live up to those expectations.
S_A_P 22 hours ago 2 replies      
One thing I've noticed. TDD takes longer. It just does. You can argue that you are racking up less technical debt in the long term but every consulting gig I've been on where TDD was the "directive" often deteriorated because the business does not want to factor in between 40 and 100% extra time to allow proper TDD coding. They want the same somewhat arbitrary and bonus driven deadlines that they always do, and in order to meet them, we usually end up tossing out TDD halfway through and reverting to just having skilled developers get the job done as quickly as possible.

THIS is the economic reality of TDD failing. A manager wanting to reduce quarterly spend so he gets his bonus doesn't care that TDD will cost him less over 5 years; he cares that he can get a project delivered on time and under budget...

colomon 23 hours ago 4 replies      
It always startles me when people assume that one programming technique either works for every type of programming or doesn't work for every type.

Working on Perl 6 compilers, we found an extensive set of tests to be our best friend. It was (probably still is, I haven't had time to help the last few years) utterly routine to write tests first and then write the code to make them work. It was a perfect way of working on it.

On the other hand, one of my personal projects in Perl 6 is abc2ly. It has lots of low level unit tests, great. But almost everything really interesting the program does is really hard to test programmatically. How would I write tests to make sure the sheet music PDF generated has all the correct notes and looks nice? That problem is significantly harder than generating the sheet music in the first place!

velox_io 21 hours ago 0 replies      
TDD is a nice idea; however, it can add quite a bit of upfront overhead, before writing any code. This isn't a problem if it is justified/needed. But the tests can become cumbersome when code changes: more baggage to carry.

Testing often becomes a KPI, and is therefore commonly gamed: doing the bare minimum to tick "100% code coverage!". I'm a fan of contracts to test the spec and boundaries (whether human or other application) of software.

TDD requires discipline and experience; you could spend an infinite amount of time writing tests and never deliver, or at the other extreme become incapacitated fighting bugs.

Our first priority should be crash-free software, THEN start to think about making it bug free.

kgilpin 15 hours ago 0 replies      
Also, when a test relies on mocks it doesn't test the real thing, and doesn't guarantee proper behavior in the real world. I suppose this is obvious from the nature of mocks. And yet, if you can figure out a clean and fast way to test something without mocks, I think you're better off.

Along with the coupling problem mentioned in the article, these are the two reasons why I am writing a lot fewer mocked tests (e.g. Rspec) than before.
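A tiny illustration of that drift, using Python's stdlib `unittest.mock` (the functions are invented for the example): the mocked test happily passes with a return value the real collaborator could never produce.

```python
from unittest import mock

def discount(price, rate):
    # Real collaborator: never returns a negative price.
    return max(0.0, price * (1 - rate))

def checkout(price, rate, pricer=discount):
    return round(pricer(price, rate), 2)

# Mocked test: passes even though discount() can never return -5.0.
# The mock has silently drifted from the real contract.
fake = mock.Mock(return_value=-5.0)
assert checkout(100, 0.1, pricer=fake) == -5.0

# Testing through the real function exercises the real contract,
# including the floor-at-zero behavior the mock ignored.
assert checkout(100, 0.1) == 90.0
assert checkout(100, 1.5) == 0.0
```

Nothing in the mocked test would ever fail if `discount` changed its contract, which is exactly the "missed regression" case from the article.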

crucialfelix 22 hours ago 0 replies      
Some code is perfectly suited to TDD. Other code not so much.

I was just working on something with a bunch of tangly functions that measure and remap data and prepare it for sonification as sound events.

I keep jest running and the workflow is much quicker and more satisfying than any other way of hacking.

gedrap 17 hours ago 0 replies      
Personally, I found that TDD works very well for small modules / classes, etc, when there's little design to be done. In this case, you can focus on writing down the spec (test cases) and be fairly confident that it works by the time all test cases pass. Also, I agree with the author about complexity involved if one decides to go down the TDD way for large, complex systems. So, essentially, it boils down to picking the right tool for the job, and TDD is just a tool, like any others.
paupino_masano 23 hours ago 0 replies      
I think it all depends on the context of what you're writing. For example, we use TDD when making additions and modifications to a tax engine. For this use case it's incredibly useful as the relative inputs and outputs are both predictable as well as repeatable.
luord 12 hours ago 0 replies      
This article made me feel good. Ever since I started doing TDD, I refactor a lot more and my code looks nicer.

Hopefully, I'm not falling into the other trap he mentions and getting into design that would be worse than 15 minutes up-front.

Sadly, I can't comment on anything else as TDD isn't practiced at all in my area.

_Codemonkeyism 22 hours ago 0 replies      
I love unit tests, they give me a good vibe and find some kinds of bugs - mostly b/c I think differently about a problem.

TDD never worked for me, b/c the code goes through many refactorings until I'm happy and it always felt tedious refactoring the TDD tests.

elliotec 22 hours ago 0 replies      
PaulKeeble 19 hours ago 1 reply      
Microsoft's own study of TDD showed that it definitely improved defect rates, and they went fully into developing with it for Vista, which is part of the reason for the delays. Nowadays large chunks of the API are automatically tested, and this has allowed Microsoft to release changes much more often with a lot less manual testing.

So while this individual finds his local team isn't getting the full benefits, Microsoft as a whole appears to be, judging by its own report on the technique and its changing outward software release cycle.

corpMaverick 21 hours ago 0 replies      
Title should be: "TDD did not live up to MY expectations".

TDD is sort of an art more than a science. You have to know when and how to do it. You also have to know how much, as the marginal utility diminishes very fast.

didibus 22 hours ago 0 replies      
I remember an old Microsoft analysis where they measured empirically time to delivery and actual defects and found TDD to not reduce defects while increasing time to delivery. Can't find it anymore.
weberc2 22 hours ago 2 replies      
The main benefit for TDD in my mind is that it mostly makes it painful to write spaghetti code. When I'm reviewing something that looks too integrated, I just ask for a unit test for that particular piece of functionality, and the author is effectively forced to go back and refactor. After going through this a few times, they learn to think about their design before they write code. Of course, many dynamic languages defeat this by offering hacks like Python's mock.patch which let you nominally test spaghetti code...
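A sketch of what that hack looks like in practice (hypothetical functions, using Python's stdlib `unittest.mock`): patching lets the hard-wired version pass a "unit test" without anyone refactoring the coupling away, while a refactored version needs no patching at all.

```python
from unittest import mock
import socket

# Coupled version: reaches out to a hard-wired dependency.
def greeting_coupled():
    return "hello from " + socket.gethostname()

# mock.patch can still test it, which removes the design pressure
# that would otherwise force the coupling out.
with mock.patch("socket.gethostname", return_value="testbox"):
    assert greeting_coupled() == "hello from testbox"

# Refactored version: the dependency is a parameter; no patching needed.
def greeting(host):
    return "hello from " + host

assert greeting("testbox") == "hello from testbox"
```

The patched test "works", but only by papering over exactly the coupling that asking for a plain unit test would have exposed.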
garganzol 21 hours ago 0 replies      
TDD works exceptionally well. The secret sauce is to find the correct scope for tests. In my experience, integration tests are the most suited kind of tests for successful TDD.
StevePerkins 23 hours ago 2 replies      
Any programming language suitable for business application development is going to have static analysis tools that can reveal your percentage of test coverage.

As long as 95% (or whatever) of your logical branches are covered by tests, I don't really care whether you wrote the tests beforehand or after the fact.

However, TDD being hard is not a justification for not writing the test coverage at some point in the dev cycle. Too many developers, and managers, make that fallacy leap.

Debugreality 21 hours ago 0 replies      
I've seen TDD used once really well in a university setting where it was used only for shared libraries/services that could be used by multiple other teams or departments but not on individual (front facing) projects.

Probably because only the best developers on the team worked on the shared services it eliminated the refactoring issue as well as ensuring shared services could be a lot more reliably and safely updated.

krmboya 22 hours ago 1 reply      
What about a kind of middle ground: do a "spike" when figuring out what kind of thing you should build, what the design should look like, etc., then follow up with TDD to stabilize the identified interfaces and produce tests that act as a system health check?

Granted, consultants may end up doing the same kinds of things for different clients, to the extent that they can just jump in doing things TDD from the very beginning.

perlgeek 22 hours ago 0 replies      
When reading the title, I had hoped for data, like when Microsoft analyzed its own developers' data to find out whether remote work impacted productivity or bug counts.

Instead, just another piece of anecdote. Sure, anecdotes from 15 years, still not what I hoped for.

Doesn't Microsoft have hundreds of dev teams, and can compare things like development speed and bug counts, and correlate with whether those teams practice TDD? I'd read that article immediately!

faragon 18 hours ago 0 replies      
Tests are good for detecting code that is not working as expected; use them as an investment/insurance, based on a budget. However, in my opinion, TDD is often more of an obsessive-compulsive religion built on wishful thinking about programmers reaching excellence by writing tests ad nauseam.
blackoil 21 hours ago 0 replies      
TDD should not be taken as a religious dogma. The way I like it is, central business logic as pure functions, which have tonnes of unit test. While integration with other services and components sits on edge which do not have unit tests, instead integration tests. If I have a key piece of code, I wanna test, but would require lots of mocks, it is time to refactor.
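That split might look like this in Python (a made-up example, not from the comment): the business rule is a pure function with plenty of cheap unit tests, while the edge that touches the outside world takes its dependencies as parameters and is left to integration tests.

```python
# Pure core: business logic with no I/O -- trivially unit-testable.
def apply_discount(subtotal, loyalty_years):
    rate = min(0.05 * loyalty_years, 0.25)  # 5%/year, capped at 25%
    return round(subtotal * (1 - rate), 2)

# Edge: integrates with the outside world via an injected fetcher;
# covered by integration tests rather than mock-heavy unit tests.
def checkout_total(fetch_subtotal, user):
    return apply_discount(fetch_subtotal(user), user["years"])

# Tonnes of cheap unit tests against the pure core:
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 2) == 90.0
assert apply_discount(100.0, 10) == 75.0  # cap applies
```

If testing `checkout_total` seemed to demand lots of mocks, that would be the refactoring signal the comment describes: push more logic into the pure core.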
99_00 20 hours ago 0 replies      
>The tests get in the way. Because my design does not have low coupling, I end up with tests that also do not have low coupling. This means that if I change the behavior of how class <x> works, I often have to fix tests for other classes.

At this point you should be realizing your code is untestable and needs to be refactored.

remotehack 19 hours ago 0 replies      
Software obeys its own dynamics. Some things work well, some not quite the same. It's nice to see someone admitting that testing is good while rigid TDD, like rigid Agile or rigid Waterfall, is bad. Corollary: what our parents said still holds true; too much of a good thing is bad.
hughperkins 15 hours ago 0 replies      
Since when did TDD fail? Which is not to say it needs to be applied systematically to everything. But there are often bits of code which are better off being correct, and TDD works well for those.
ryanmarsh 23 hours ago 2 replies      
My day job is teaching TDD.

Just like other agile rhetoric I've found the benefits are not what the proponents advertise.

I teach it through pairing and here's what I find.

TDD provides two things.

1. Focus

Focus is something I find most programmers struggle with. When we're starting some work and I ask, "ok, what are we doing here" and then say "ok, let's start with a test", it is a focusing activity that brings clarity to the cluttered mind of the developer neck deep in complexity. I find my pairing partners write much less code, and much better code (even without good refactoring skills), when they write a test first. Few people naturally possess this kind of focus.

2. "Done"

This one caught me by surprise. My students often tell me they like TDD because when they're done programming they are actually done. They don't need to go and begin writing tests now that the code works. They like the feeling of not having additional chores after the real task is complete.

pcarolan 23 hours ago 1 reply      
Defining the interface before you write the code is the major advantage of test-driven development and what it added to that way of thinking was very valuable especially to novices. It also makes your code more modular and reusable. Writing code as if other people were going to use it is something we don't talk about enough.
henrik_w 22 hours ago 0 replies      
One of the key benefits for me is the mindset (fostered by TDD) to make as much as possible of the code (unit) testable. This naturally leads to less coupled code, because otherwise it is not possible to test it in isolation. So the fact that you start with the aim of unit-testability leads to better designs.
alexandercrohde 23 hours ago 0 replies      
TL;DR: When we treat unit tests as an end in themselves, we end up writing clunky tests for clunky code. If an engineer doesn't understand modular, reusable code, that engineer won't be able to write code that can be tested easily. Thus understanding design is a prerequisite to effective TDD.
jv22222 15 hours ago 0 replies      
I'm not sure if the OP is advocating against using tests and CI completely, or just against the process of writing tests first and then code... Anyone got any thoughts on that?
hennsen 1 hour ago 0 replies      
Microsoft did not live up to expectations. For 20 years and counting now...
buckbova 22 hours ago 5 replies      
Seems a little arrogant to say most developers don't know how to refactor or do it poorly. Maybe it's true. I really can't say one way or the other because what I see is most devs believe they don't have time to refactor.
baybal2 21 hours ago 1 reply      
As I remember, Microsoft was one of the original TDD pushers. One person who worked there for over 20 years told me that "peak TDD" at Microsoft was right around the time of the Win ME release.
AdmiralAsshat 22 hours ago 1 reply      
Stylistic critique of the article: is it too much to ask that you spell out your acronym at least once throughout the entire article? The acronym "TDD" appears 16 times in the article, and not once do we get "test-driven development" spelled out.

I get that it's a technical blog, but "TDD" isn't exactly a household name. You can't utter it in the same breath as SSL or RSA and expect people to know what it means without context.

As a test (no pun intended), try reading the article with the premise that you have NO idea what "TDD" stands for. Can you reasonably infer it from the rest of the article?

cphoover 19 hours ago 1 reply      
TDD failed? What? No it didn't. When was the last time you depended on an untested library for a major project?
matchagaucho 12 hours ago 0 replies      
Salesforce.com Developers are the highest paid in the IT industry and TDD is hardcoded into their development process. (All code is required to have 75% unit test coverage).

Correlation is not causation... but maybe it is?

dc2 22 hours ago 0 replies      
Writing TDD tests made me a better developer because it pointed out just how coupled my applications were.
jasonkostempski 20 hours ago 0 replies      
Could have figured that out much sooner if someone had written tests for the expectations.
emperorcezar 23 hours ago 0 replies      
Gonna throw the baby out with the bathwater because bad programmers write bad tests.
haskellandchill 22 hours ago 0 replies      
TDD works for me and the rest of Pivotal Labs, maybe 1000 or so people. YMMV
24gttghh 22 hours ago 0 replies      
Test Driven Development? All these acronyms and not once is it spelled out in the article or the discussion! Is it like saying HTTP or DNS to most people? I'd honestly never heard of it...but the concept seems logical from a high level.
kmicklas 12 hours ago 0 replies      
Most developers are bad at refactoring because most of the tools for it are terrible. Even at Google, something as trivial as renaming a function can be a monumental task.
mdpopescu 23 hours ago 1 reply      
TLDR: TDD done badly doesn't work well.
soared 20 hours ago 0 replies      
Yeah I could google it, but its common practice to spell out an acronym the first time you use it.
partycoder 10 hours ago 0 replies      
TDD forces you to design with testing in mind, also known as construction for verification.

If testing gets in the way, it is because the design doesn't emphasize testing.

Then, if you have high coupling, you have high complexity and a stronger reason to test.

programminggeek 21 hours ago 0 replies      
This is why TDD failed...

People don't like doing it.

So they don't.

The end.

briandear 23 hours ago 1 reply      
I disagree with the assertion that TDD takes more time. TDD takes less time if you factor in the reduction of errors TDD helps prevent.

This article should be renamed "TDD doesn't work if you don't do it right."

This article seems to argue for Big Design Up Front. OK, if you do that, then why not write the tests for those designs after you make the design? Then the code you write conforms to the design.

I don't think anyone advocates that writing tests is the same as the design process. Tests are the result of design, not the other way around. The gray area is not design up front, but how much design up front.

mncharity 12 hours ago 0 replies      
Boston used to have a software craftsmanship meetup. One month, on the train going home, a few of us discussed "how to describe TDD". Someone had a teaching gig coming up. That night, I attempted to distill the views expressed by this couple of experienced TDD folks. Here it is, FWIW.

# What is TDD?

TDD is JIT-development, built on tests.

It's not developing things before you need them. That's too easy to get wrong. It's not built on reviews and approvals. They're too slow and fragile.

## TDD's JIT development with tests is:

1. Live in the present.

Focus effort on what is clearly useful progress now, not speculation.

Don't do planning or development before you need to, because later you will better know what is actually needed, if anything. Be restrained but thoughtful in judging how much of what needs to be done now.

Don't put off integration. Until then, usefulness and spec are only speculative.

2. JIT-spec

Capture each behavior you care about as a test.

Keep them simple, small, and clear. A new spec is a failing test. A passing test means "done with that -- next!".

Don't stuff your mouth. Don't do lumps. Keep it bite-sized.

Don't spec it until you need it, even if you (speculate) you know where you are going later.

Don't worry about the spec having to change later; specs usually do. If the specced behavior is clearly useful now to make progress, that's good enough. If it's something you don't really care about long-term, you can remove it later.

3. JIT-implementation

Keep implementation minimal.

Don't create speculative code. Do reactive implementation and refactoring. If you "might need it later", write it later, when you have clearer need and spec, and more tests available.
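A minimal red/green cycle in Python, as a sketch of the above (the function and spec are invented for illustration): the failing test is the new spec, and the implementation stays as small as the test demands.

```python
# Red: a new spec is a failing test (slugify doesn't exist yet,
# so running the test now would fail with a NameError).
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Green: minimal implementation -- nothing speculative, no extra
# features "we might need later".
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # passes: "done with that -- next!"
```

When a later spec needs, say, punctuation handling, that becomes the next failing test, and only then does the implementation grow.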

## About tests.

Programs have a few behaviors you care about, and many more that are implementation details. Test the behaviors you care about.

Tests redistribute development flexibility and speed.

* Behaviors pinned down by tests, are harder to change. Because you have to update the tests too. They're transparent but rigid, and change is slower.

* All other behaviors, become much easier to change. Because everything you care about is tested, implementation changes can be done energetically, without careful cautiousness and fear of accidental hidden breakage. They're opaque but flexible, and change is faster.

Distribute your transparency and flexibility wisely.

Some topics I'm notably unclear on include test refactoring and management.

* when does one delete tests?

* how are lines of development pivoted?

* how are different classes of tests handled? (eg: external spec commitment; less critical spec I still care about; sentinel spec, which I don't mind changing, but I don't want it to happen accidentally/silently; spec that's transient development scaffolding, and should be removed later; and so on)

Opportunities include:

* broader coverage of the strategy, test, and implementation layer activities

* description of how test suites and implementations change longer-term

* specific discussion of cross-cutting issues like risk mitigation

* tighter characterization of core (eg, all tests and code are a burden, and start with a high time discount, so create and retain only those which are clearly and currently useful)

draw_down 19 hours ago 0 replies      
I don't really have solid arguments against it, I just never found it particularly helpful, or ended up with a result that seemed to justify always working this way, or even defaulting to working this way.
dreamdu5t 22 hours ago 1 reply      
TDD: For people who've never used a decent type system.

Writing a Solidity test to add two integers today really drove home the point.

joeblau 22 hours ago 2 replies      
im_down_w_otp 23 hours ago 0 replies      
PBT (property-based testing)-centric TDD has worked very well for us.
Take Naps at Work nytimes.com
511 points by aarohmankad  2 days ago   269 comments top 48
rectang 2 days ago 10 replies      
For the last 8 years at work, I napped for 12-40 minutes basically every afternoon. I headed out into the parking lot and slept in the back of my car. (I bought the model I own after testing to make sure the back was nappable.)

My most productive, creative time was the hour or two in the late afternoon following the nap.

All my co-workers knew the deal and so did my boss. Nobody else napped, but my napping was normal and accepted.

The idea of giving that up if I ever go back to a "normal" butts-in-seats company seems stupid and uncivilized.

beilabs 2 days ago 2 replies      
When I worked in China (5000 engineer company) we'd have our lunch in the canteen, I found that many returned to their desk for the last 15 minutes of their lunch break.

Out came the pillows, all lights were dimmed, and calming music played through the PA system. No one spoke or made noise during this time; a quick power nap of 15 minutes did the trick for a lot of people. Definitely something I approve of; I've a small mattress in my office just for such occasions.

have_faith 2 days ago 9 replies      
>Sleeping on the job is one of those workplace taboos like leaving your desk for lunch

What kind of company is it a taboo to leave your desk for lunch?

peterburkimsher 2 days ago 1 reply      
Lunchtime naps are common in Taiwan. Someone will turn off the light in the office at the start of the lunch break, and turn it back on at the end.

That change of lighting encourages everyone to take a rest or leave, because staying will disturb the people who want to rest. It also saves a little electricity, I suppose.

freeflight 2 days ago 3 replies      
I'm really envious of people who are actually able to take naps, because it just doesn't work for me. By the time I actually "phase out", which takes me at least 20-30 minutes, most of my lunch break would already be over, and I'd wake up all groggy and irritated.
daxfohl 2 days ago 10 replies      
Better: eat less lunch. Ever since I've started having no more than a moderate bowl of low-cal (not "diet", just not "vitamix'd pizza") soup, the second half of the day is just as productive as the first. If I eat enough for a food coma, I go home; I'd provide no useful anything for the rest of the day, nap or no, so no sense hiding it by merely being around.
oneeyedpigeon 2 days ago 0 replies      
> Naps at work can make you more productive. Maybe don't be this obvious about it, though.

That image caption sums up the prevailing attitude for me, really: don't be obvious that you're being more productive, especially if it goes against the grain. Better to toe the line and be less productive. There are many examples of the same phenomenon, beyond just napping.

innopreneur 2 days ago 0 replies      
In India, this seems to be common in households. You will find family members taking an hour or so of nap after lunch. Even during festivals or special occasions, the host arranges a nap session for guests, though it is seen as unprofessional in offices. Until a few years ago I used to think of those people as lazy and unproductive, but now my perspective seems to be changing.
nstricevic 1 day ago 0 replies      
I nap for 20 minutes every day at work (at my previous company, for 4 years or so and now since I'm working remotely). Here are my steps for taking a nap:

1. Find a good place to nap. Use the same place every day. I used to nap under my desk on a lazy bag at my last job.

2. Quickly find a comfortable position. Quickly fix everything that bothers you (like watch on your wrist or anything else that's making you uncomfortable).

3. Start breathing from your stomach, not your upper torso. Your stomach should rise and fall, not your upper torso.

4. Relax your whole body. In the beginning, relax one region at a time. First your toes. Then your lower leg, then your upper leg. Then the other leg... until you have relaxed your whole body. It should feel as if your mind is separate from your body, like it could leave it. Your body should be completely numb. Later, as you progress, you will be able to relax your whole body in a few breaths, as if some force flows from your stomach and removes tension from your body as you breathe out.

5. Start removing thoughts from your brain. As you start thinking about something, just stop. Another thought comes in. Kill it. Just kill thoughts. You can think only about your breathing. Nothing else.

That's it. With these steps, I'm able to fall asleep in just a few moments. I use this all the time.

Bonus: I have a special position that I "developed" that mitigates office sounds. I nap on my back, slightly turned to my left side. I put my left ear on the pillow or lazy bag. I put my right hand over my right ear and over my head. That way, the pillow isolates my left ear, while my right biceps isolates my right ear from sounds. I've found this to be very effective.

Good luck napping.

suneilp 2 days ago 1 reply      
I like meditation better. I want a meditation room at work, complete isolation, with options for sitting and lying down. Deep meditation has become useful for me to relax and recharge. Also, when you get good at it, it helps you solve things faster when you get stuck on a problem.
overcast 1 day ago 0 replies      
I work about 5 minutes from my house. I get to go home, in silence, grab some food, and nap in my own bed. It makes ALL the difference in the world.
lz400 2 days ago 1 reply      
The difficult part for me is avoiding napping during 50% of my conference calls.
ck425 2 days ago 0 replies      
I've regularly taken naps at lunch too and found they make a big difference to afternoon productivity. I've found walking also has a similar effect, especially on grass. I normally do one or the other depending on how well I slept the night before.

How long do people find most effective? I've found either 12 min or 30 min to be best for me. 30 min if I'm particularly tired, 12 min if not. If I go longer than 12 I get foggy for around 30 min after.

spike021 2 days ago 0 replies      
At my job, I always feel like it's frowned upon to take a nap. In most cases we're a "as long as you get your work done, we can be flexible with hours" kind of shop. But the moment I close my eyes even for just a few short moments, someone either taps me or later mentions "man, you look exhausted, are you okay?"

Power naps make me feel more refreshed, which I'd assume people would want out of their coworker/employee late in the work day, but maybe I'm wrong.

blhack 2 days ago 10 replies      
I don't think I could physically fall asleep at work. Are people really this tired while they're at their offices?

It seems like if you're so tired while you're at work that you actually want to take a nap, then maybe there is another underlying problem.

drmanny 2 days ago 3 replies      
What if your boss is about to catch you napping under your desk and you have to call a friend and convince him to call in a bomb threat so you can escape?
flor1s 2 days ago 0 replies      
Here in Japan it's very common to take naps during the day. Personally I don't take naps but I try to take enough rest during the night. Taking naps during the day might actually hurt your ability to sleep in the evening. https://www.verywell.com/30-days-to-better-sleep-go-to-bed-o...
amelius 2 days ago 0 replies      
I'll make sure that article is visible in my browser while I'm taking that nap.
partycoder 2 days ago 1 reply      
In Japan this is called inemuri. However there is etiquette associated to sleeping on the job.

I've heard in some cases there are secret nap meetings where all the attendees agree to take a nap.

comstock 2 days ago 0 replies      
My feeling is that apart from really essential meetings (and those are very few) your schedule should be pretty open and sleep/break/nap when you want (ideally at home).

Then use metrics other than bums on seats to measure performance.

Osiris 2 days ago 0 replies      
My office had two small nap rooms in the break room. A small bench-like bed, a blanket, and a sliding door that makes a dark room. It's great, really, to take a 20-minute break to refresh and refocus.
elchief 2 days ago 2 replies      
I wonder how long it takes things like work productivity tips on HN to filter out to the real world?
notadoc 1 day ago 5 replies      
I don't understand napping, never have. Are you really so tired that you must sleep in the middle of the day? Maybe you just need better rest at night, or maybe you need to exercise more or eat better food? Unless you're a toddler, napping feels like compensating for something else.

To each their own, if it works for you that's great.

closed 2 days ago 2 replies      
Before I started drinking coffee, I had a solid nap flow going. Now that coffee is in the picture, I'd probably just lay there for 20 minutes twitching :/.
rdtsc 2 days ago 1 reply      
Don't unless you're sure people won't talk behind your back and management won't get the wrong impression. "Look at so and so snoring while we are working hard getting stuff out of the door".

Rationally you'll explain and show the article from NYT, and they'll agree. But irrationally they'll still form an opinion and stick to it.

aluhut 2 days ago 3 replies      
I'm pretty sure it wouldn't be allowed where I work. Even though it's a US company, it's led by Germans, and in some offices they will already complain if you eat for too long. "Take Naps at Work. Apologize to No One." would be a pretty dangerous statement there.
ltwdm 1 day ago 0 replies      
I think age plays a role too: a few years ago I would have laughed at the idea of someone having to take a break and rest in the middle of an 8-hour period to regain energy. But now I can clearly see how much that matters.
menzoic 1 day ago 0 replies      
NYTimes HQ has 3 nap rooms on different floors. Not sure how many people knew about it.
pmarcelino 2 days ago 0 replies      
I've been taking naps at work for the last year, since I started trying polyphasic sleep. Naps play an essential role in my polyphasic strategy. The 30 minutes I spend sleeping after lunch save me some sleep hours at night. You can get some data points about naps and their importance in this book (which I recommend): https://www.amazon.co.uk/Sleep-Myth-Hours-Power-Recharge/dp/...
ksk 1 day ago 1 reply      
Do the companies who allow naps see it as a perk or a necessity? IOW, if you see naps as beneficial would you be OK with people taking them during a product release, or a service interruption?
rosege 2 days ago 0 replies      
There are definitely days when I've wished my workplace had a nap room. Maybe I've had a busy few days and I just get a bit more tired than normal after lunch. It makes sense to me: if I'm tired at 2pm, I normally have another 3.5 hours until clock-off. If I don't nap, I won't be very productive. If I could nap, I would only lose half an hour and then be productive for the other 3+ hours. I would probably stay longer anyway, so it's no lost time to my employer.
beat 1 day ago 0 replies      
One of the nice things about working from home, for me, is the ability to take the occasional nap when I'm dragging. It takes me about 15 minutes to get what I need... once I hit a half hour, it becomes counterproductive as I'm starting to "sleep" rather than "nap".

Most nights, I do get sufficient sleep, but I need the occasional nap anyway.

peterkshultz 1 day ago 0 replies      
A study by NASA found that the optimal time for a nap was 26 minutes.

The lengthy study can be found [here](http://www.jetlog.com/fileadmin/downloads/NASA_TM_94_108839....).

fixesCrashes 1 day ago 0 replies      
And here I am, on a holiday, pretending to provide IT support to five employees. The other 90+ employees chose not to work on the holiday. I hate society...
baby 2 days ago 0 replies      
I'm all for this, but I think it's also important not to eat too much for lunch and to get some exercise; then you probably won't feel like taking naps during the day as much.
inestyne 1 day ago 0 replies      
This goes back to the days when people drank much more, at work, at lunch, etc...

In a drinking culture, nodding off at your desk was a very, very bad thing because it meant you couldn't handle your liquor, and that just could not be tolerated. With all the drinking with management and clients, if you had a problem you were a liability. It's insane, but it was another time. Not my generation, but I've had conversations about it.

Napping now, with the amount of real work we all do and the stress level we are willing to carry, is just not anybody's business anymore.

kraftman 2 days ago 0 replies      
Or you could work shorter hours and get a decent nights sleep.
sgspace 1 day ago 0 replies      
Napping is fireable offense at my company. :(
quotemstr 2 days ago 0 replies      
I find nap rooms useful for those days when I've been there all night and need to have a semblance of functionality for meetings the next day.
oridecon 2 days ago 1 reply      
My coworker used to take a nap and fart really loud without noticing. I'm not trying to be funny I just worry I might do the same.
carrja99 1 day ago 0 replies      
I quit energy drinks 3 months ago, taking a nap during lunch has been immensely important.
malkia 2 days ago 0 replies      
Where I work, go/nap is respected. If I see a teammate napping, I would try to be less noisy, and leave him/her to rest.
deskglass 2 days ago 0 replies      
How do you take naps? It takes me an hour or two to fall asleep. I don't drink coffee.
raoulr 2 days ago 0 replies      
Technique around here is to nap in the car. 15 minutes is usually enough to be refreshed.
bakul 1 day ago 0 replies      
I found that I don't feel sleepy after lunch if I work standing up. But if I sit in a chair I can feel sleepy, and a five-minute nap makes for a more productive afternoon than fighting the sleepiness. Has anyone else noticed this effect, where standing up cures after-lunch sleepiness?
SirLJ 1 day ago 0 replies      
This is the greatest benefit of working from home (or second greatest if you have a lot of traffic to fight)... Around lunch, do a quick 30-min workout, have a protein shake, lie down, read a bit, and fall asleep for 30 minutes, and you'll feel like you have one more productive day in the afternoon...
bitwize 1 day ago 0 replies      
I'll take "fast tracks to a pink slip" for $800, Alex.
cortexio 2 days ago 1 reply      
nytimes.. i'll skip
Analyzing Cryptocurrencies Using PostgreSQL timescale.com
504 points by akulkarni  1 day ago   183 comments top 15
inlineint 1 day ago 4 replies      
A nice analysis, and it shows how SQL makes it easy to quickly explore data.

However, it seems like the plots of the results of the queries were done manually by writing some code to make each plot.

I can't help but mention that using Apache Zeppelin Notebook [1] with the Postgres interpreter for Zeppelin [2] (Spark SQL should provide comparable analytical capabilities as well, but this comment is not about it), it is possible to show graphical representations of query results without writing a single line of code.

[1] https://zeppelin.apache.org/

[2] https://zeppelin.apache.org/docs/latest/interpreter/postgres...

adamnemecek 1 day ago 24 replies      
So what exactly is the current attitude towards cryptocurrencies? E.g. a coin named Diamond went up 200% since yesterday. https://coinmarketcap.com/currencies/diamond/ However it's not obvious what's driving it.

Also, on Saturday, there was an ICO of this coin called TenX (https://www.tenx.tech/) which sold 100,000 ETH (~$30M) worth of TenX in like a minute. Is it all pump and dump?

I can't imagine that anyone has definite answers but I'm interested to hear some opinions.

RobAtticus 1 day ago 0 replies      
This was work done by our intern over the past few days using TimescaleDB. Our team is around to answer any questions!
gthtjtkt 1 day ago 2 replies      
Decent SQL skills for an intern, but I don't really see any "analysis" here. As someone who's been buying and following cryptocurrencies intently for the past few months, I can confidently say there's nothing in this article that even a novice investor would be interested in.

The TL;DR is "Prices went up, prices went down. Some more than others." There's nothing actionable in that.

And a lot of the currencies in your dataset have basically zero volume. AMIS -- your "most profit in a day" -- appears to have a 24h volume < $1,000 and isn't even traded on a single major exchange. Again, that information serves no purpose, and coins like that should probably be excluded as outliers. The fact that they were not only included but highlighted tells me that the author knows nothing about the market they're attempting to analyze.

This looks more like clickbait / advertising, to be honest. "Hey, we need someone to write an article on cryptocurrencies because it's a hot topic. Just give me 2,500 words and a few graphs ASAP. Doesn't matter if it's relevant."

cupcakestand 1 day ago 4 replies      
OT and hijacking the thread:

"TimescaleDB (the OPs product) is a new open source time-series database built up from PostgreSQL."

Do you know good alternatives or which distributed databases are generally well suited for huge volumes of time-series data? Cassandra?

mrb 1 day ago 1 reply      
"Turns out that if you had invested $100 in Bitcoin in July 2010, it would be worth over $5,000,000 today."

Actually if you had invested $100 three months earlier, in April 2010 (when BTC started trading at $0.003 on BitcoinMarket.com) it would be worth $87M today.
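The arithmetic here is easy to check. As a sketch: the ~$2,600 spot price below is my own assumption (roughly where BTC traded in mid-2017), not a figure from the article:

```python
# Back-of-the-envelope check of the returns claimed above.
# SPOT is an assumed mid-2017 price, illustrative only.
SPOT = 2600  # USD per BTC (assumption)

def value_today(invested_usd, entry_price, spot=SPOT):
    """USD value today of a past purchase made at entry_price USD/BTC."""
    coins = invested_usd / entry_price
    return coins * spot

# $100 at the April 2010 price of $0.003 buys ~33,333 BTC
print(round(100 / 0.003))                      # -> 33333
print(round(value_today(100, 0.003) / 1e6))    # -> 87 (i.e. ~$87M at the assumed price)
```

At that assumed price the claim checks out to within rounding.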

leot 1 day ago 2 replies      
Assume near-perfect liquidity among x-coins. Assume, also, that software makes starting a new kind of coin as easy as starting a small business. This implies that the crypto-currency market cap will be divided among far more than "21M".

So, given near-perfect liquidity, what makes any particular coin valuable? For shares it's earnings per share. Will we need something akin to "earnings per coin"?

spreadstreet 1 day ago 0 replies      
If you need additional datasets, head over to https://spreadstreet.io where you can download over 3,000+ datasets across hundreds of digital currencies.

Nice article! Had a good time reading it.

mbonzo 1 day ago 0 replies      
In case anybody here finds this useful, I made a screencast following the blog for setting up and loading the data: https://www.youtube.com/watch?v=RSFC24FMxy4&feature=youtu.be
placeybordeaux 1 day ago 0 replies      
Props for providing the scraped data set as well!
irrational 1 day ago 0 replies      
I remember when Bitcoin first came out years ago. Back then it was still possible to mine coins on your slow desktop machine without too much work. I considered doing so, but figured they would never go anywhere, so why bother. Sigh...
smaili 1 day ago 8 replies      
Slightly off topic, but what are good CoinBase alternatives for buying/selling that cover currencies outside of just Bitcoin, Ethereum and Litecoin?
riston 1 day ago 0 replies      
Well written article and great analysis.
partycoder 1 day ago 0 replies      
EGreg 1 day ago 1 reply      
Where can we get access to this dataset and time series?

I would like to run my own analysis too!

Show HN: GreenPiThumb A Raspberry Pi Gardening Bot mtlynch.io
528 points by mtlynch  3 days ago   179 comments top 50
setq 3 days ago 5 replies      
I love these projects. In my experience they are always destined to fail, but they're great fun anyway. I've tried a few times to do similar things. I'll quickly catalog my disasters:

1. Seedlings need blowing around a bit when they pop out or they get all spindly. Cue wiring up two 80mm PC case fans to a chain of a single astable then two monostable 555s to generate an oscillating wind field that ran for 5 seconds in each direction after a delay of 10 minutes. Dried the compost out, blew most of it away, and then killed the plants dead. No tomatoes for me!

2. Watering robot version 1. Similar to above but with a 74hc390 dividing down the clock so it only ran once every day. Used an unprotected MOSFET to control a small water pump from ebay. Back EMF blew the MOSFET up and jammed it as a short. Emptied the entire water reservoir into the pot, down the wall and into the carpet.

3. Watering robot version 2. Same as above with the problems fixed, except I ran out of bipolar 555s, so I used CMOS ones, which are a little more tetchy about noise. Cue the last 555 getting jammed in an on state and the same thing happening. This time, the tupperware box with the electronics got wet and the wall wart exploded.

Edit: meant to say to the OP - nice work. This is the spirit of all things interesting :)

crusso 3 days ago 1 reply      
"But I'm a programmer, not a gardener. If I had a plant, I'd have to water it and check the plant's health a few times per week. I decided it would be much easier if I just spent several hundred hours building a robot to do that for me. If the plant lives to be 80 years old, I come out slightly ahead."

The mark of the start of any good hobby project is a sense of humor about the time it really takes to accomplish something simple with technology on the first go-round.

exelius 2 days ago 3 replies      
So the reason your moisture sensor project failed is that those types of moisture sensors are really designed for "stick it in, test it, and pull it out" testing. If left powered on in a moist environment, the conductive material on the sensor will quickly corrode (quickly as in the span of a few hours, as seen on the graph).

However, Vegetronix makes an ultrasonic soil moisture sensor that does not have electrodes, and thus does not corrode. It is far more complex and expensive ($40) but it's designed as a moisture sensor for sprinkler systems and as such is engineered to be left in the ground.

Edit: Link to Vegetronix sensor: http://www.vegetronix.com/Products/VH400/ . I have used it and it works well, but as it turns out, even this is not sufficient to really automate a garden. You need fertilizer. Hydroponics make dealing with that complication much easier -- until you realize that the fertilizers are caustic / acidic enough that you have to flush the lines with water as well...

In other words, there's a pretty good reason you can't buy a kit off the shelf that will grow plants :)
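Analog probes like the one linked above report a voltage that gets mapped to volumetric water content (VWC) via a piecewise-linear calibration curve. As a hedged sketch of applying such a curve on the Pi side — the breakpoints below are invented for illustration; substitute the vendor's published (voltage, VWC) pairs for a real sensor:

```python
# Convert a sensor voltage to volumetric water content (VWC %) by
# linear interpolation between calibration breakpoints.
# These (voltage, vwc) pairs are ILLUSTRATIVE, not a vendor's real curve.
CURVE = [(0.0, 0.0), (1.1, 10.0), (1.8, 35.0), (3.0, 60.0)]

def vwc_from_volts(v):
    """Interpolate VWC for voltage v, clamping outside the curve's range."""
    if v <= CURVE[0][0]:
        return CURVE[0][1]
    for (v0, m0), (v1, m1) in zip(CURVE, CURVE[1:]):
        if v <= v1:
            # linear interpolation within this segment
            return m0 + (m1 - m0) * (v - v0) / (v1 - v0)
    return CURVE[-1][1]

print(vwc_from_volts(1.1))  # -> 10.0 (exactly at a breakpoint)
print(vwc_from_volts(2.4))  # ~47.5, midway through the last full segment
```

The same interpolation shape works for any monotonic calibration table, which is why vendors publish them as breakpoint lists.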

Eduardo3rd 3 days ago 2 replies      
I really like the addition of the web camera - it's a nice bonus on top of the more traditional temperature/humidity/light/moisture readings that most people incorporate into these DIY systems. Much better than what I made the first time I built one of these things.[0]

The one thing that made my scratch my head was your approach to measuring moisture. There are several very reliable methods for measuring soil moisture directly (changes in resistance, capacitance, time domain reflectometry, etc) that will give you exactly what you are looking for here.

"Therefore, we felt it was fair to assume that watering based on moisture level is impossible and that GreenPiThumb is doing the best it possibly can, given certain inexorable limits of the physical world."

This just isn't true. The sensor you picked up from Sparkfun should give you decent measurements for a while before degrading gradually depending on your soil chemistry.

[0] I ran a consumer soil moisture IOT company for a few years that was sold to Scotts Miracle Gro in 2016.

danhardman 3 days ago 3 replies      
Did the pot you used have holes in the bottom? It looks like it's just sitting on a desk, so I'm assuming it's basically a bucket?

Your moisture problem could just be that you were relying too much on the water evaporating/being absorbed rather than letting it drain out. Gravel at the bottom of a pot with holes in it helps water drain really well. Alternatively, without the gravel, you could place the pot on a dish, fill the dish, and let the water be absorbed from the bottom up.

2III7 3 days ago 4 replies      
I bought this moisture sensor for a gardening project


Haven't had the chance to try it out in soil yet, but reading the comments it looks promising. It uses capacitance instead of resistance and connects directly to the I2C pins of the Pi. So, easier to setup and should be more reliable.

On the software side I use Grafana https://grafana.com/

Not really made for gardening projects but its monitoring and alerting capabilities are pretty much perfect for this kind of application. Not to mention how easy it was to set it up on the Pi and get all (temperature, moisture, light) the sensor data in.

thedaniel 2 days ago 2 replies      
In the water distribution section the author mentions the other gardening software doesn't mention how they distribute water. There's a longer history of irrigation than there is of embedded software development, so maybe he should have talked to someone that actually does agriculture or googled 'irrigation system parts' and bought one of the thousands of existing drippers for a buck.

Of course the later heading "the gardening wasn't supposed to be hard" seems to imply that he assumes non-coding skills are easy to figure out or obvious, which is a sadly common attitude in the tech world.

bfu 3 days ago 5 replies      
Instead of measuring soil moisture try:

 - measuring the air moisture of a small upside-down cup on top of the soil
 - measuring the weight of the whole pot

Or don't measure at all and instead use a precise amount of water at precise times (RTC module, medical-grade piezoelectric pump). My similar project is on hold at 20 or so sketches of various types of water pumps.

grw_ 2 days ago 2 replies      
A Raspberry Pi is the wrong choice for this project from my perspective; the complexity is far too high for something this simple. I looked at building the same thing (minus watering) because commercial products seemed way too expensive (the Parrot Flower Power is $60!).

The cheapest DIY solution I could think of was ESP8266 ($2), Vreg ($.5), moisture sensor ($.5) and LiPo battery (i have many of these..) but I decided I didn't have time or inclination to write the software.

I continued looking for commercial products, and ordered one of these: https://www.aliexpress.com/item/Chinese-Version-Original-Xia...

Pros: cheap ($15); has temperature, light, and 'fertility' (capacitance?) sensors.
Cons: logs to a phone app (in Chinese) via BTLE instead of WiFi.

After a few weeks it seems to be working satisfactorily and I will probably order a few more units.

nrs26 3 days ago 3 replies      
I am so excited to see this! I've been working on a very similar project to monitor my outdoor vegetable garden using a raspberry pi and some ESP8266's. Like you, I'm using this as a project to better learn javascript, angular and django. It's in the very early stages, but I'm really loving the experience so far.

Here's a picture of my setup. http://imgur.com/a/BV188

I have a enclosure (that I recently made waterproof) that sits out in my garden that has the ESP8266 wireless chip in there, which works very similar to an Arduino with built in WiFi. I have it reading data in from a soil humidity / temp sensor, an air humidity sensor, a light sensor, and a air temperature sensor.

That data gets sent back to a simple django webserver that I have running (indoors) off of a raspberry pi. It records all the sensor readings every 10 minutes and registers them to various plots in my garden. And then, if there are any big issues (no light for 2 days, lower than average soil humidity or soil temperature, etc), it texts me.

Eventually I'll connect it to my irrigation system, but I don't trust it enough yet!

I have the exact same problem with soil humidity sensors that you mentioned. I even sprung for some fancy ones (http://bit.ly/2sMNRnD) that claim to be waterproof. I cannot make them read useful information and, once it rains or I water outdoors, the sensors read 99% for the next few days. It's very frustrating and the missing piece to make all of this work.

Like you, this started as a quick, month-long project and now it's become something a lot bigger :)

I think eventually I'd like to build this out to be a vegetable garden planner, so I can plan my vegetable garden at the start of the season, monitor what's happening with them, and automatically trigger my irrigation system if needed.

Anyway - it was great to read this! I'd love to hear how this project evolves and would be happy to share any of my experiences as I've put this together.

P.S. And, it's a long shot, but if you (or anyone reading this) figures out how to accurately measure soil humidity and temperature in a waterproof setup, I would be forever grateful!

zfunk 3 days ago 1 reply      
Love this. I went down a similar path last year, using a Raspberry Pi to water an outdoor vegetable patch. In the end I used a package [1] to control a remote-controlled plug socket. I also hit similar problems with soil moisture, so I went down the route of pulling a weather feed: pump water if it isn't going to rain. Easy if you are watering outside!

[1] https://github.com/dmcg/raspberry-strogonanoff

lloydjatkinson 3 days ago 2 replies      
Hmm. A MOSFET for a motor switch might be overkill. Also, there's no flyback diode, so your Pi will pretty much let its blue smoke out when you get a flyback voltage, or at least the MOSFET will.
joshribakoff 2 days ago 1 reply      
As for the faulty soil moisture, check out tropf blumat. It's made in Germany (you know the Germans make good stuff). It uses osmosis and gravity to keep the soil moisture consistent, no electricity. I've had great success with roots even growing up out of the soil towards the drippers https://youtu.be/UWPLr0Selh8
jnty 3 days ago 0 replies      
So great to read. This is exactly the kind of project the Raspberry Pi was designed to enable - doing something a bit odd (and arguably pointless!) but having fun and learning loads along the way.
tp3z4u 3 days ago 1 reply      
The plants look waterlogged; the roots need air to breathe.

Drain the water back into the reservoir (use a simple filter to prevent damage to the pump) and just use a schedule for watering.

I used a mechanical timer switch for 15 mins every hour for my hydro setup. For soil, such tiny plants, and no lights, you would need far less frequency. A general rule of thumb is to give it enough time between waterings to let the soil get a bit dry.

hammock 3 days ago 0 replies      
>GreenPiThumb: a gardening bot that automatically waters houseplants, but also sometimes kills them.

Quite an advanced bot: as close to a human as you can get!

dbrgn 3 days ago 1 reply      
I also thought about doing a project like that; thanks for the write-up. I somehow can't really believe that it's that hard to measure soil moisture.

Are there any industrial plant-watering robots? What approaches do they use?

Edit: I met this guy a few months ago at a faire: https://lambdanodes.wordpress.com/ The project doesn't seem to have advanced since then, but maybe he'll get better results with his epsilon node.

ldp01 3 days ago 1 reply      
I'm not much of a gardener but the idea of measuring the moisture level seems ill-conceived... Namely because you are measuring at only a single point in your 3D region of soil, and you don't know what range of moisture should be maintained without experimentation.

Using our guy's own sawtooth model of watering/drainage, it would make more sense to just water at fixed intervals and experiment with the frequency to see if the plant grows.

Still a fun project!
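That sawtooth model is easy to play with offline: moisture jumps on each watering event and decays between them, so you can tune the interval in simulation before touching a pump. A toy sketch (the decay and dose numbers below are invented, not measured):

```python
# Toy sawtooth model of soil moisture under fixed-interval watering.
# DOSE, DECAY, and the starting level are made-up parameters for illustration.
def simulate(hours, interval, dose=20.0, decay=0.98, start=50.0):
    """Return hourly moisture levels on an arbitrary 0-100 scale."""
    levels, m = [], start
    for h in range(hours):
        if h % interval == 0:
            m = min(100.0, m + dose)  # watering event
        m *= decay                     # evaporation/drainage each hour
        levels.append(m)
    return levels

levels = simulate(hours=240, interval=24)
# With these parameters, moisture settles into a stable band over ten days
print(min(levels) > 30, max(levels) < 80)  # -> True True
```

Sweeping `interval` in a loop like this is a cheap way to find a watering frequency that keeps the simulated band where you want it, before experimenting on a live plant.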

cbanek 2 days ago 0 replies      
As someone who has done a lot of gardening, and automated that gardening, I personally find soil moisture to be a complete boondoggle to measure. It doesn't tell you whether the plant is taking up the moisture, or whether the pH is correct (or you have some kind of nutrient salt lockout). The author says they are having trouble because the water isn't evenly distributed.

While it sounds scary, hydroponics is much easier to automate. Use a substrate like grodan blocks that can't be overwatered, and have it drain back into the reservoir you are pumping from. Then it's just a matter of setting your cycle time appropriately, and watering for x seconds every y hours, and changing the water after a set number of days, and adding new nutrients. By using more water than you need without risk, you can ensure an even level of watering over the entire medium.

It's also much easier if you have a nutrient problem as you can easily flush with a large amount of properly pH'd water to 'reset' your substrate, which is very hard to do with soil.

This doesn't even get into things like potting mix typically has eggs for all sorts of pests, if not pests themselves. If you are lucky and get a clean batch, you are still providing a great environment for pests to live.

While it does cost a bit more to do hydro, it's honestly not that much. If you want to be super cheap you can use pH papers or buy a $20 pH pen. A starter nutrient kit from General Hydroponics should be under $30.

PS - here's a link to my raspberry pi automated hydro system on hackaday (https://hackaday.io/project/12418-garden-squid)

Dlotan 3 days ago 0 replies      
I have a pretty similar project, but the problem with mainstream moisture sensors is that they break after some time (or I did not find a good one). For me, the expensive solution, a Flower Power from Parrot http://global.parrot.com/au/products/flower-power/, turned out to be handy. It works over Bluetooth and they have some documentation. I did some work in Python: https://github.com/Dlotan/flower-master-fab/blob/master/app/... and get good results over a long period without too many outliers
INTPenis 2 days ago 1 reply      
As a city dweller who has grown a lot of plants on balconies and in apartments I have also been down that road but I found a much easier solution to the watering. Autopots[1].

With those you can have 6 plants on a single 47l tank and only re-fill it every month. (depends on how thirsty they are)

It's a godsend when I want to go away for a few weeks' vacation.

For other house plants that are not connected to a huge water tank, I tend to just turn a 2l plastic bottle upside down into the soil after thoroughly saturating the soil with water first.

So with this the only real requirement I had for my grow op was monitoring. Because I have hard wood floors and I don't want them to swell up due to a leaking tank.

1. https://autopot.co.uk/products/

ShirsenduK 3 days ago 2 replies      
You may also hack this product by Xiaomi.


There are many wifi enabled switches to enable the water pump. Try out the ESP8266 and/or ESP32 in your next project.

Your mind will be blown by the possibilities :D

Also, you could lose the soil and go for hydroponics; that would make this really feel like it's from the future.

splitbrain 3 days ago 1 reply      
I recently build something similar for my balcony. But instead of a $30 raspberry I used a $9 NodeMCU: https://www.splitbrain.org/blog/2017-06/10-automated_plant_w...

It also just took me a few evenings instead of months.

tylerjaywood 2 days ago 2 replies      
This is awesome. I did something similar at reddit.com/r/takecareofmyplant

I was burning through moisture sensors on a weekly basis until I hooked the sensor's power up to a GPIO pin and only flipped the power on when I was taking a reading. Now I get readings every 10 seconds and haven't had to replace the sensor in over 6 months.
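That power-gating trick is worth factoring out. Here's a hedged sketch with the GPIO calls injected as plain callables so the logic runs without hardware; on a real Pi you'd pass functions backed by RPi.GPIO and your ADC library (all names here are placeholders, not a specific project's API):

```python
import time

def duty_cycled_read(power_on, power_off, read_adc, settle_s=0.1):
    """Energize the probe only for the duration of one reading.

    power_on/power_off/read_adc are injected callables, so this is
    testable without a Pi; settle_s gives the sensor output time to
    stabilize after power-up.
    """
    power_on()
    try:
        time.sleep(settle_s)   # let the sensor output settle
        return read_adc()
    finally:
        power_off()            # probe stays unpowered between readings

# Fake "hardware" for demonstration:
log = []
value = duty_cycled_read(
    power_on=lambda: log.append("on"),
    power_off=lambda: log.append("off"),
    read_adc=lambda: 512,
    settle_s=0,
)
print(value, log)  # -> 512 ['on', 'off']
```

Keeping the probe unpowered except during the brief read is what slows the electrolytic corrosion that kills resistive sensors left energized in wet soil.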

yellowapple 2 days ago 0 replies      
The traditional answer to automated irrigation for indoor plants (and outdoor plants when you need better water efficiency than with a sprinkler system) is typically a drip system (background info: https://en.wikipedia.org/wiki/Drip_irrigation). Interesting that this project didn't seem to go that direction, seeing as how water distribution is kind of a solved problem in agriculture/horticulture (at least in terms of the mechanical aspects; efficiency can still use improvements, and little projects like GreenPiThumb are definitely steps in the right direction).
krylon 2 days ago 0 replies      
I dig the writing style! I find that putting in a joke or two every now and then makes it far easier for me to keep up my attention.

Also, this makes me think. I have two Raspberry Pis lying around. One is a glorified video player; the other is just gathering dust right now. I have wanted to do some kind of hardware project with one for a while, but I am kind of lazy and have pretty much no knowledge of electronics.

I have wanted to build a weather station, though, that keeps a long-term record of its measurements in some kind of database. A Pi would be well suited for that, so I might get around to it one day after all.

option_greek 3 days ago 1 reply      
I had good results using a simple timer board that switched the pump on and off at fixed intervals. The total setup (for 5 potted plants) cost around $25 including the timer board. The results were great, especially with tomato and okra, where we had a hard time collecting all the produce :)

Edit: Was using something similar to this board:http://www.ebay.com/itm/Automation-DC-12V-LED-Display-Digita...

I think an RPi is overkill for this project.

inetknght 3 days ago 1 reply      
One of the better laid out Pi project summaries I've seen. Good work, I say.
tanvach 2 days ago 0 replies      
This was exactly the route I wanted to go down until I found out about hydroponics, especially the Kratky's method. It's super easy, does not need any electronics or pumps, and I've successfully grown lettuces and herbs.


rdez6173 2 days ago 1 reply      
I wonder, for a container plant, if you can reliably use weight as a measure of soil saturation. Of course plant growth would add to the weight, but I suspect that could be accounted for.
peter_retief 2 days ago 0 replies      
I used a fish tank pump with surgical tubing, a moisture sensor, and an Arduino to switch the pump on when it got dry, but as you say, it's never that easy; the whole thing is lying in a drawer somewhere. It didn't make anything easier. Another project involved load cells and LPG gas bottles. That was a bit more useful, but it needed to be constantly powered and the wires kept falling out, so that's also in a drawer somewhere. I still haven't made a useful project yet.
neelkadia 2 days ago 0 replies      
I did almost the same thing to feed my rat! :D http://www.feedmyflash.in/ and also open-sourced the code at https://github.com/neelkadia/FeedMyFlash
somecallitblues 2 days ago 0 replies      
I'm building a space bucket atm using Arduino instead of Pi. It's a lot of fun. There are some amazing projects on spacebuckets.com. https://www.spacebuckets.com/u/POstoned has a good Pi setup with schematics etc. I think his reddit post is very detailed with sourcecode.
un-devmox 2 days ago 1 reply      
> The first time we pumped water into our planter, the tube directed a small stream into one spot, completely soaking that area but leaving the rest of the soil dry.

I use irrigation tubing with drip line emitters in my garden. That might be a solution for you. The cool thing about the emitters is that they control the flow of water and are easily positioned. I think they start at 0.5 gal/hour and go up to 10 gal/hour.

betolink 3 days ago 0 replies      
This project reminds me of an installation at MoMA NY, the same concept but using an Arduino. Now there are even kits to do it for a whole garden: https://www.cooking-hacks.com/documentation/tutorials/open-g...
fpgaminer 2 days ago 1 reply      
I vaguely remember that when I read about soil moisture sensors, they all needed to be cleaned and dried after every use. Basically, you couldn't leave them in the soil. Seemed odd to me, but I never dug further into it.

Is that true? If so, that would explain why the sensor doesn't work here, but leaves me wondering why I've seen so many projects try to use the sensors that way.

pavel_lishin 2 days ago 2 replies      
Questions to everyone who's tried similar things:

1. why use a water pump, instead of a gravity-fed system with a valve you could control with a servo?

2. Would a scale be able to measure soil moisture? Dump in X grams of water, wait until the scale registers X/2 before adding more. (Some fiddling would be required to see how much of the water is retained by the plant as building material.)
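The scale-based scheme in question 2 can be sketched directly. A minimal version in Python, where the gram readings and the hand-tuned drift estimate for plant growth are illustrative assumptions, not anything from the article:

```python
def should_water(current_g, baseline_g, water_g, growth_drift_g=0.0):
    """True once at least half of the last watering has left the pot.

    current_g:       pot weight now.
    baseline_g:      pot weight just before the last watering.
    water_g:         grams of water added at the last watering.
    growth_drift_g:  estimated weight the plant itself has added since
                     baseline, subtracted so growth isn't mistaken for
                     retained moisture.
    """
    retained = current_g - baseline_g - growth_drift_g
    return retained <= water_g / 2
```

After each watering you would record the new baseline and keep polling the scale; the drift term is where the "fiddling" in the comment comes in.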

bitJericho 3 days ago 0 replies      
The Color Computer from Tandy started because of a project called "Green Thumb". Hurray for gardeners and farmers! :)


theandrewbailey 3 days ago 2 replies      
I'm sorta thrown off by the use of "we" and "our" in the middle of this article. From the top of the article (which doesn't use we/our), I understand that this is just one guy doing this. Is there someone else in the process that I'm missing?
stevehiehn 3 days ago 0 replies      
This is inspiring. I've been thinking about hand rolling a rain water reserve style irrigation system that essentially just pulls/scrapes a weather forecast and waters my garden. Not nearly as precise as this but hopefully useful.
_devillain_ 3 days ago 0 replies      
This was a hilarious read (as well as informative). Fantastic work, fantastic writing!
ajarmst 2 days ago 1 reply      
I dislike the use of "robot" for anything that doesn't have autonomous movement (those were "RC Wars", not "Robot Wars"---although I, for one, would have delighted, at a safe distance, in someone strapping a circular saw to something autonomous). Nor is watering a single indoor potted plant the same thing as "gardening." An actual Gardening Robot would have been very interesting (I envisioned something trundling around with a spade and actuators for pulling weeds when I saw the title).

I actually love the Pi---one is now my primary computer---but it seems to have created a niche for "let's add a website and database to my really trivial control systems project" that I'm not sure really advances much of anything.

LordKano 3 days ago 0 replies      
I have been thinking about making a homebrew Phototron using an arduino, grow lights, temp+moisture sensors and a water reservoir.

The addition of the camera makes me think that a Pi might be a better solution.

banderman 2 days ago 0 replies      
I am excited to see projects like this. I believe that it will be important to be able to deploy fully autonomous food production facilities to Mars ahead of any colonization effort.
pjc50 2 days ago 1 reply      
On this subject, does anyone know what the state of the art is in using machine learning to identify (and preferably eliminate) weeds?
soheil 2 days ago 0 replies      
I once set up something similar, but with solar panels in the Mojave Desert to water a peach tree. Needless to say, that didn't work out as planned.
water42 2 days ago 1 reply      
the codebase is really elegant. thanks for open sourcing everything!
crb002 2 days ago 1 reply      
Why computer vision of plant stress is better than fancy sensors.
z3t4 2 days ago 0 replies      
it would be interesting to see if it's possible to make a closed system. a black box that gives tomatoes ...
Thoughts on Insurance ycombinator.com
463 points by akharris  22 hours ago   367 comments top 56
sbarre 21 hours ago 14 replies      
> After paying for broker commissions, fronting costs, reinsurance, customer service, claims processing, there's often around 50% of the original premium dollar left to pay claims, which is the primary purpose of an insurance company.

What about shareholders? One of the biggest problems I have with insurance companies as for-profit enterprises is the inherent conflict of interest between servicing claims and customers as well as possible and turning a profit for shareholders.

I've always felt that insurance companies should be run as not-for-profits, or at the very least co-ops.

Don't get me wrong here, still pay the employees and the executives competitively (you want things to run efficiently and by talented teams so you need to attract top talent), but otherwise the whole enterprise should be working hard to make sure every other dollar goes to helping the customers who pay the premiums, and that's it.

6stringmerc 22 hours ago 3 replies      
Disclosure: I've worked in both P&C and Reinsurance for several years. Also on Wall Street for a couple years. Also in Healthcare for a couple years.

Overall I think this is a nice briefing on the state of the insurance market in the modern economic landscape. It is extensively regulated, with rate setting and various nuances. All of this, of course, comprises part of a grand "data set" that looks quite appealing for modernization.

Unfortunately, I think there should be a strong expectation that the market (industry) will both be openly hostile to "disruption"-oriented attitudes a la Uber and laugh at any ability to raise capital to compete at any meaningful level.

I applaud your interest in perhaps improving a legally sanctioned form of graft (I prefer Mutual Organizations myself). Conversely, my experience leads me to laugh a little because I've seen the numbers and the complexity behind the scenes. I've got no interest in the industry beyond the paycheck it provided, but it is quite fascinating in numerous respects. Just the naming conventions alone once you get to Bermuda is a trip. Good luck.

socrates1998 20 hours ago 7 replies      
The biggest problem I have with insurance is the inherent disproportionate power relationship between the company and their customer.

Essentially, the moment a customer becomes more trouble than they are worth, they are dropped. This is true with other types of industries, but if your health insurance drops you when you get cancer, you can't get more health insurance, and you die.

Same with house insurance, car insurance, etc.

And it causes death or financial disaster all too often.

Some would argue that "this doesn't happen" or "it's illegal".

1) It happens ALL THE TIME.

2) It's illegal, but if you don't have the means or education to fight it, you are pretty much done.

It's the fundamental nature of insurance companies to milk healthy customers while dropping unhealthy customers. It's just too tempting and they are too protected by our legal system for them to not do it.

I know this is pessimistic, but as long as you realize this fundamental imbalance in the relationship with you and your insurance companies, you can mitigate it to a certain extent.

But, really, the only way to completely mitigate it is to be so rich that you don't even need insurance.

asr 19 hours ago 4 replies      
Before you go thinking about how this applies to healthcare... health insurance in the U.S. is different from other types of insurance, and personally I think we'd all be better off if it were not even called "insurance."

Health insurance is a mix of pre-paying for predictable and certain expenses with tax-free dollars, a transfer/entitlement system to ensure that more people can afford insurance (by design, your premium does not match your expected risk--either you are pooled with others at your employer, or your exchange account is subject to rating band requirements which means, for example, that in many states old people can only be charged 3X more than young people even though old people are likely to be much more than 3x more expensive to insure), and actual insurance. I'm not sure what percentage of your premium reflects the cost of actually insuring you against uncertain future health events, but it's far from 100%.

This is an interesting article, and some of it applies to healthcare in the U.S., but much of it does not.

wyldfire 22 hours ago 14 replies      
I've always wondered about US health insurance -- why can't the physician give me quotes about my personal obligation for various treatment options? It's frustrating that as soon as it's time to come up with a bill, poof there it is but prior to the bill being generated all I get is shrugs?

Is it because the insurance coverage algorithms are too complicated? Because the different entities involved in a single treatment plan is too complicated to navigate? Because physicians feel that cost is orthogonal to medicine and they prefer not to be involved/prefer to recommend the ideal treatment based on a predicted outcome? All of the above?

It feels like if there were a particular hospital group / physician group that had this feature, they would attract a lot of attention. Just imagine, "Your initial differential diagnosis will not exceed $150 and we'll discuss treatment options or more conclusive diagnostic tests afterwards."

All I've heard so far are physicians who don't accept insurance but instead have a straightforward "menu" for common items, which is interesting but not what I think most people want.

Animats 17 hours ago 2 replies      
Insurance companies have been into information processing in a big way since paper and pencil days. The first company to buy a commercial computer was Metropolitan Life.

In commercial insurance, there's a question of how intrusive the insurance company should be. My favorite insurance company, The Hartford Steam Boiler Insurance Company, established in 1866, was finally bought out by Munich Re a few years ago. Hartford Steam Boiler insures boilers and equipment in industrial plants. Most of their employees are inspectors. If you want to buy a policy from them, they come out and inspect the equipment. They give you a list of what you have to fix. Then they come back to inspect after everything is fixed. Then they sell you a policy. They also inspect again, randomly and unannounced. Cut corners on maintenance, and HSB cancels your policy. The premiums are low, because boilers inspected by Hartford Steam Boiler don't blow up.

Most companies hate that, even though the premiums are lower.

Spearchucker 4 hours ago 0 replies      
I've done a lot of systems design and development in the insurance industry, and it's a vast space with opportunities not just for insurers.

The most recent thing I worked on was a pricing and activation engine that sat behind a web site that acted as a broker for a number of insurers. That isn't new, but it was new for this market - life insurance. As such my employer was a single provider on a panel of providers.

The web site that provided the panel brought a number of innovations - one of them being an underwriting SaaS. Panel participants are able to enter their underwriting crown jewels into a 3rd party web site, secure in the knowledge that their IP wasn't going to be leaked or shared with others on the panel.

There were many more efficiencies that the model enabled, which were never realised (well, not by the time I left) because the SaaS provider couldn't reach financial agreements with some of the providers.

Greed was (is?) something that risked torpedoing the most innovative thing I've seen in life insurance, ever.

All that to say I agree that this market is brimming with opportunity. The market is so incredibly broad, and deep, and so complex... Regulation is definitely a thing, but hardly an impediment. I have, for example, spent many years in this industry, but never worked for (or with) reinsurers. I have a long-standing suspicion, though, that that market is so convoluted that the front-line insurer can conceivably be its own reinsurer, after having passed through like 15 other reinsurers...

togasystems 21 hours ago 1 reply      
After having spent the last 3 years in insurance with Allay trying to take on employer health insurance costs by making it easier for smaller groups to become self insured, I have found that the regulations are cumbersome but not huge blockers. The regulations are there to protect people and for the most part do that job correctly.

The hard part about this industry is that there is no single incumbent to disrupt, but thousands of very small businesses who have personal relationships with their clients. Also, wherever you jump into the process, you have to deal with companies who do not value technology as much as the HN crowd would. These companies still print out PDFs and have automated very little of their business. No matter how fast you make your software, you are at the mercy of the companies below and above you in the chain.

If anybody wants to nerd out on the insurance industry, my contact is in my profile.

gwintrob 22 hours ago 2 replies      
We're working on the modern commercial insurance brokerage at Abe (https://www.hiabe.com/).

Aaron's right that too much of the industry runs on pen and paper. It's confusing for the buyer and a massive headache for brokers.

Most of what we're building is behind the scenes to make the brokerage way more efficient. If you're an insurance expert with ideas to leverage tech or engineer interested in man+machine symbiosis, I'd love to chat (gordon [at] hiabe.com)!

toddwprice 21 hours ago 0 replies      
Healthcare needs to be funded through a non-profit community trust whose first priority is to secure the highest overall health for the population. Insurance is motivated by profit which will always seek to squeeze more money from the system as its first priority.
frabcus 17 hours ago 1 reply      
Surprised not to see mention of the fundamental modern tech problem with regard to insurance.

As soon as you have big enough data, and artificially intelligent enough algorithms, the insurance becomes too predictive.

The whole point of insurance is to pay people when rare, bad events happen to them. If an insurer can predict well enough who will be the victims, it can refuse them insurance, and hence remove the entire purpose and benefit of the insurance.

This is the flip side to moral hazard. What does it even mean to offer commercial insurance, if it is only offered to the people who least need it?

This is least bad, for example, with a car predicting you're a bad driver as you can improve behaviour. It is really problematic with data such as gene sequencing, which you can't do anything about.

Only way out I can see is compulsory insurance, levied as a tax. Or maybe non-profit or Government AIs, trained to find a sweet spot between moral hazard and its opposite?

johnobrien1010 1 hour ago 0 replies      
One thing I'd add is that the potential for new technology to better identify risks is also the potential to nullify the need for insurance. It is fundamentally the uncertainty over which house will burn down that creates the need for insurance; if anyone had a 100% accurate model, the folks whose houses were definitely not going to burn down would not need insurance.
gilsadis 17 hours ago 0 replies      
It's an excellent article and spot on. Insurance has remained fundamentally unchanged for centuries. In order to really change it, a complete re-architecture is needed. Insurance should be fully digital, hassle free, transparent and fair.

It's hard to really change it when you're not a fully licensed and regulated carrier. An MGA can create a beautiful UI on top of an old insurance product, but eventually consumers will come face to face with that old product (for example in claims), and it's going to be the same old experience again.

As many of you stated here, there's an inherent conflict between the insurance company and the insured. Until that changes, the experience will stay the same.

At Lemonade, we are changing that. We'd love to get your feedback. Here's how it works - https://www.youtube.com/watch?v=6U08uhV8c6Y&t=9s

Disclaimer: I'm head of product at Lemonade (lemonade.com)

zstiefler 21 hours ago 1 reply      
While much of what Aaron wrote is spot on, one thing missing is the importance of distribution. This is a challenge for any startup, but it is more pronounced in a regulated industry like insurance, with largely homogenized products and serious restrictions on how you can legally distribute your product (see Zenefits).

If you consider how most people and small businesses buy insurance, they typically make purchasing decisions once a year at most. As such, you need to get in front of them at the exact moment they want to purchase. GEICO and Progressive have done this really well, but have effectively bid up the cost of online advertising to make it prohibitively expensive. This is also why agents are such a powerful force in the industry (and because they effectively provide carriers with an initial underwriting screen which they don't need to file publicly).

It's important to get the product right, and there are many flaws with most P&C insurance today (chief among them that the forms haven't really changed in the past few decades), but I'd encourage any entrepreneurs to make sure they have an answer on distribution before spending time on product.

Disclosure: I've spent a lot of time looking at this as founder of a P&C insurance startup a few years ago.

SmellTheGlove 20 hours ago 0 replies      
Nice writeup. Certainly scratches the surface a bit (and I think that was the intent). I've spent the bulk of my career in the industry, both in P&C and A&H, and there is a ton of opportunity. You correctly conclude that it's not because we're idiots, even though some days I'm kind of an idiot.

Again, tons of opportunity, with some barriers that aren't unique to the industry but maybe unique to what the tech community might encounter (and this is by no means an exhaustive list) -

1. Capital requirements: You need a dump truck full of money to do much in this industry, at least if your intent is to write or reinsure business, and to a lesser degree even if you're working in ancillary services.

2. Regulatory environment: Assuming US operations, 50 states with 50 different sets of regs need to be okay with what you're doing. In addition, federal law comes into play in certain spaces (GLB, HIPAA, etc).

3. Distribution: The current model compensates every layer of the distribution model very well. It's not as easy as you might imagine to disrupt when entrenched interests are all making a ton of money AND are interdependent upon their neighbors in the value chain to continue to do that. You can't just hack a piece off because that bothers their neighbor, who in other circumstances might otherwise be a competitor, but has a shared interest. The relationships get complex.

None of this is fatal, but you must navigate it and play by many of the rules, particularly with #2. As Zenefits learned very publicly, the insurance industry and its regulators weren't going to let someone do what Uber did to the taxi industry - which was essentially to operate in the grey/black and just ignore the calls to stop. Insurance is a subset of financial services, and financial services is a powerful industry. For any startup, my advice is to build compliance and regulatory relations in from day 1. It's the least exciting thing you'll ever do, but is extremely important. Anyhow, I'm rambling. By no means do I want to discourage anyone from trying, I'm just trying to highlight some of the big things that are "different" in our space.

I'm particularly interested in the data piece because that's what I've done and built a career on in this industry, but it never struck me as sexy enough for YC. Appreciate the article.

joshfraser 17 hours ago 1 reply      
My issue with insurance companies is that they're conflicted by design.

As typically publicly traded companies, they have a fiduciary responsibility to their shareholders to maximize profits. And the only way to maximize profits is to deny coverage.

The purpose and value of insurance is amortizing risk. There are very few things that I think should be controlled by the federal government, but a single payer health care system is one of the few that actually makes sense from an incentives perspective.

In the words of Charlie Munger who's spent more time thinking about insurance than almost anyone: "Show me the incentive and I will show you the outcome."

Why does the US pay the highest prices for mediocre health care? Perverse incentives.

sharemywin 20 hours ago 1 reply      
You also might want to check out Guidewire. They're the company that's basically taking over the software side of P&C for carriers.


If you're going to innovate on the software side in P&C you would need to figure out what they're not doing or could not do that would allow you to get ahead of them.

Entangled 20 hours ago 7 replies      
What if a thousand people get together and put $100 each in the blockchain in order to cover emergencies for the unlucky 10% that may need aid in times of distress?

A society can't function if more than 10% of the population is in distress and that's exactly the purpose of insurance, the many helping the few, not the other way around.

But since insurance moves a lot of money (just like banking) and there is propensity to fraud and corruption, then it is highly regulated making it hard for newcomers to disrupt the market.

Start from the beginning again using technology, allow tech insurers to take from 100 and serve 10, that's it. Who to serve? That's the easy part, those in need.
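The pooling arithmetic above can be checked with a quick calculation. This sketch uses the numbers from the comment (1,000 members, $100 each, 10% in distress); the zero-overhead pool is the idealization being proposed, not how any real insurer operates.

```python
def mutual_pool(members, contribution, distress_rate):
    """Return (total pool, payout per member in distress) for an
    idealized mutual pool with no overhead, fraud, or reserves."""
    pool = members * contribution                 # total collected
    in_distress = int(members * distress_rate)    # unlucky members
    return pool, pool / in_distress               # equal split among them

pool, payout = mutual_pool(1000, 100, 0.10)
# a $100,000 pool shared among 100 unlucky members: $1,000 each
```

The gap between this $1,000 and what a real claim costs is, in effect, what the regulation, reserves, and reinsurance discussed elsewhere in the thread exist to manage.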

osullivj 20 hours ago 2 replies      
The article spends many paragraphs on the complexities of the B2B and B2C players in insurance, their distribution channels, and how they get paid. It spends one sentence on noting how the pricing of risk is based on actuarial models, and then moves on. The pricing of risk is absolutely key to the entire industry, and it is stone age! It's all done in spreadsheets.
kozikow 22 hours ago 1 reply      
Unless you start by covering something completely new, I think it makes sense to start as an analytics provider for insurance. Becoming a carrier is expensive, and an MGA can only sell existing policies already created by a carrier. If you sell an existing policy, you are at a severe data disadvantage compared to existing insurance companies, even with superior tech.

Shameless plug: At https://tensorflight.com we are working on P&C insurance and we focus on analytics for commercial properties mentioned in the article. Please get in touch at ( kozikow [at] tensorflight.com ) if you are interested in the subject.

chiph 18 hours ago 0 replies      
The article mentions "Personal Cyber Insurance"

Many insurance companies will insure against data loss as part of your homeowner's policy (with lots of exclusions..) and I've been wondering why they don't partner with one of the online backup services (SpiderOak, Backblaze, etc.) to preserve their customers' data and reduce the risk of a payout, or at least how much they have to pay on a claim.

sonink 19 hours ago 1 reply      
I might not know too much about the topic, but imo the American healthcare system has a lot to learn from the Indian healthcare system. My guess is that America should simply copy-paste India's model and it should be good to go.

For the resources that it spends on healthcare, the Indian healthcare system is perhaps the most efficient in the world. There is insurance if you want it, but you can choose your health providers in the free market too.

Hospitals, Doctors, Medicines, Tests, Procedures, Post-op care/services - everything can be comparison shopped. And if you have more time than money, you can show up at any one of the almost-free govt. funded hospitals to get treated by who is often a very good doctor.

The inefficiency of the American system may well be a result of extensive litigation around healthcare, but I suspect that it's simply an oligopoly defended by politicians in its pocket.

I would guess that for any hospital expense above a few thousand dollars, for someone who can't afford it, it might make a lot of sense to just hop on a plane to India.

ksar 20 hours ago 1 reply      
> For most of the insurance world, the hardest and most important thing to find is effective distribution and customer acquisition.

This is absolutely true. If you look at how regional, family-owned P&C carriers got their start, you'll find they started as brokers. These brokers found a profitable niche (call it motorcycles in California, or non-standard auto in Texas) that they wanted to own, moved on to become MGAs, and then admitted carriers offering multiple lines of business.

If you can find a way to own the customer as a distributor, you own the life-blood of the entire business downstream. As a result, P&C insurance companies are huge spenders on marketing (GEICO spends $1.7B/year). It would be game-changing if this cash were used to provide utility to their customers above and beyond the insurance transaction.

I'm of the opinion that insurance premiums could be the new ad dollars - used to create products that promote lock-in, and much more consumer-centric insurance companies.

Mz 20 hours ago 0 replies      
Re the comments on Data in this piece: Yes, insurance is an industry suffering from perpetual information overload. I worked at a large insurance company for 5.25 years. I processed claims and we relied heavily on systems for looking up endless information. They would totally overhaul the system for doing that every few years. I think it occurred two or three times in the time I was there and I heard from other people that, yes, this seemed to happen every two to three years. It had all kinds of problems. What was supposed to be a new and improved, more user-friendly system meant that all the old timers who had figured out how to find everything were now lost, even with training. It also meant that some things were not compatible and not findable anymore. It was truly bad.

I have a certificate in GIS. As a claims processor, I worked with multiple databases all day long. This included such things as looking up names and addresses for providers (doctors, hospitals, etc -- the people providing medical care). I suggested we create a GIS based system to make this easier. The idea did not fly.

But if you want an area of opportunity in insurance tech, try finding solutions to some of their back end problems. Managing information overload is a serious and ongoing problem in the insurance world. I was there 5.25 years and I only know a thimbleful of information about the industry. Laws and regulations vary by state. In order to sell in all 50 states, you need to get licensed in every state individually and we had to be able to look up "state exceptions" which were laws that impacted how claims were paid in each state and on and on.

People in insurance are absolutely not stupid. It is just overwhelmingly complicated and no one can keep up with all of it. And, yes, it is very fragmented. But if you want a good business opportunity in this space, consider trying to build back end solutions to help them better manage the information in which insurance companies are simply drowning.

artur_makly 1 hour ago 1 reply      
really dumb idea but..

What if the insurer and healthcare provider were one and the same?

Wouldn't that remove the fraud 'game' , increase transparency, and lower premiums?

or will it actually make it 2x worse for the patient?

asmithmd1 18 hours ago 0 replies      
I am surprised no one has mentioned:


They are doing three things:

1. Innovating on customer acquisition with a mobile app

2. Building out a for-benefit corporation insurance carrier one US state at a time.

3. Innovating on the business model.

The conventional, for-profit side of the business takes a flat 10% of premiums and "buys" insurance from their captive for-benefit carrier. Any money left over in the for-benefit insurance carrier is donated to the charity of your choice.

honestoHeminway 6 hours ago 0 replies      
Denial of service by obfuscation, denial of service after exceeding a certain amount, denial of service by lawyering: this is all information that needs to be compiled from anecdata into solid numbers that anyone can instantly access, allowing for a metric of insurance quality.
sharemywin 20 hours ago 1 reply      
The biggest thing I can say is don't underestimate underwriting. There's a reason a lot of carriers focus on specific areas. There's a hidden customer acquisition cost in that a group of new customers brings a bunch of bad business that needs to be weeded out.

If you think about it, a customer knows their own risk better than anyone. So you're betting that you can judge their risk better than they and your competitors can, over and above your cost to manage paperwork, regulations, money, and fraud.

grizzles 19 hours ago 0 replies      
Nice article & loved Kyle's article about the thorny issues surrounding customer acquisition.

I grew up around the industry because my father worked his entire career in it. I've always found it fascinating. Actuarial science is so underrated. I'm also an entrepreneur and I've previously thought about entering it to disrupt it. If there is one thing I'd add to the main article it's this:

If you look at the data vs all other industries, Insurance is the most profitable industry out there. That's not bad considering the product is manufactured out of thin air.

This is perpetually interesting because to me it kind of destroys one of the central tenets of capitalism. It's a very old industry. If anyone can conjure the product out of thin air, you would expect much more competition and lower margins by now. But there isn't. So the industry is really effective at erecting moats to keep innovators at bay. In my opinion, this is the main "value" (cough) that the regulators provide in the market. The industry gets away with so much, and it needs reform, but I'd bet against it ever happening. This situation is replicated in every country around the world. That's why it's hard to be a true entrepreneur in the industry. You need an investor with deep pockets to enter the market. You need to verify that your brilliant customer acquisition strategy works. Then you need to wage war against the incumbents.

Bonus content for actuarial nerds looking for a good chuckle: http://reactionwheel.net/2017/06/venture-follow-on-and-the-k...

jpswade 19 hours ago 2 replies      
My favourite article on insurance is one by the BBC[1], which questions what makes gambling wrong but insurance right. It also points out how its history is steeped in the shipowners and traders who met in the coffeehouse of Lloyd's of London in 1863.

1. http://www.bbc.co.uk/news/business-38905963

TimPC 15 hours ago 0 replies      
Attempted tl;dr version:

Insurance is a horrendous industry and someone should reform it because it sucks and could do so much better.

Insurance exists in a horrendous sales market where connections are required to multiple providers across multiple roles in a complex sales process.

Insurance is subject to highly fragmented regulatory complexity making it difficult for any company to scale to multiple markets. This not only affects you, but it affects many of the providers you build partnerships with, making each new market often require new relationships.

So my takeaway is: don't do insurance unless you have something so overwhelmingly compelling that you can't not do it. Also expect that the idea and working technology get you 1% of the way there, and the other 99% will be building relationships, sales channels and regulatory compliance.

sytelus 11 hours ago 0 replies      
When you go to a clinic for a simple allergy injection, they charge you $1500. The whole process takes a two-hour stay in the clinic, and the doctor spends about 15 minutes with you. When a healthy woman goes to deliver a normal baby in a hospital, she typically gets charged $2000 per night while the doctor spends less than an hour with the patient and the cost of materials is almost nothing. Does this feel like a data or technology problem to anyone?

In the USA, private entities that own clinics and hospitals have recognized that they can charge virtually anything in the name of providing "gold plated" care. All the while, insurers have recognized that they can charge a person $1500 a month in premiums without anyone noticing, because premiums are taken directly out of people's paychecks by employers. The vast majority of employed people have no clue that they are actually paying their own premiums, which are almost the same as what they would pay as unemployed individuals. Instead, people assume they get low-cost insurance because their employer has some sort of great "group deal". So neither party has any incentive to reduce cost. It's neither a data nor a technology problem; it's how markets are completely eliminated from the equation by a law that says employers must provide insurance, even though employers simply transfer the cost to end consumers.

If the government created a law that employers must not provide insurance and everyone must buy their own insurance on the open market according to their preferences and budget, I believe the cost of insurance would fall dramatically in a very short amount of time. This is how a lot of developed countries operate, and the cost of equivalent-quality care there is usually 10x lower precisely because of this.

elihu 20 hours ago 0 replies      
It seems like for health insurance at least, the problem of trying to optimize insurance premiums based on any risk factor you're legally allowed to discriminate on just sort of goes away in a Medicare-for-all style single-payer system. Everyone pays taxes and receives health care services -- you just have to make sure your tax rates are set properly to cover the costs incurred.

(I think governments are sometimes unfairly criticized for inefficiency because they incur higher costs simply by not having the option to cherry-pick their customers the way private companies usually can. There are of course cases where the criticism is justified.)

hardtke 16 hours ago 0 replies      
I'm surprised he didn't call out title insurance as a place for potential innovation. This insurance pays out at something like 1% of premiums collected. Historically, customer acquisition was handled by giving a large kickback to the real estate agent. The research involved (looking for contractors' liens and such) seems like something that could be automated using NLP.
corford 14 hours ago 1 reply      
Would be interesting to know how many of these points map over to the insurance landscape in Europe? I know next to nothing about either but, from talking with Stateside friends, have the (possibly wrong!) impression that the insurance market in Europe is less sclerotic and a little more modern/competitive/innovative.
iamnotlarry 8 hours ago 0 replies      
I have come to believe that the insurance industry in the U.S. has become part of the cause of expensive medical care. I am not opposed to the idea of insurance, but here are some of the problems I see in how it works today.

I do not believe that insurance companies actually negotiate the price down. I have seen too often that the price goes up when I produce proof of insurance. While it may seem counter-intuitive, insurance companies have perverse incentives to push prices up. Even the most well-meaning insurance company will opt for certainty over a potentially lower cost. But I don't think they are well-meaning to begin with, and I don't think it is even structurally beneficial to them to reduce costs. For an example of when lower costs benefit insurers, see the price of identical healthcare purchases outside the U.S. Products manufactured inside the U.S. magically become cheaper in Belgium and Nigeria.

The insurance industry has worked with the medical industry to obfuscate the cost and price of healthcare. This is in both industries' interests. If I pay $100 for a doctor to splint a broken finger, I'm happy I had insurance to make it so cheap. I'll let my insurance company pay the other $1700 and I may never even look at the bill that shows a total cost of $1800. I won't see that they charged $25 for the popsicle stick, $25 for the tape, $5 for the pen used to fill out the medical chart, $30 for the receptionist, $75 for the nurse, $300 for the PA, $200 for the x-ray, $200 for the radiologist, $50 for the ibuprofen, etc. If I actually had to pay that bill, I would argue it line by line. I would absolutely refuse to pay large portions of the bill. $5 pens that they reuse for the next patient are obviously an unethical ripoff. But I don't have to pay those line items, so I don't fight it. I just have to pay a $1500 premium every month. And I'd be crazy to refuse to pay that.

We all know that insurance premiums are high because healthcare costs so much. This is because of lawsuits and freeloaders. This is how price obfuscation shifts the battle. We don't question anybody's integrity over the $5 pen because the discussion is all about lawsuits and freeloaders. If I had to pay the bill myself, I'd ask why I got billed for 4 x-rays, but only received 3. That 4th one that didn't turn out because the radiologist put the cartridge in backwards? I'm not paying for that. I'd ask why the receptionist seems to be making $500/hr. I'd ask what the lab fee was for. I'd comb my bill and question everything. $1800 is a lot of unplanned expense for me. $100? Not so much.

If I had to pay my medical bill in full and then file for reimbursement, I'd fire a company with 90-day turnarounds and switch to the insurance company with 5-day turnarounds. And the next time I got stung by a bee, maybe I'd not run to the emergency room. Yes, that could lead to bad outcomes. But the current practice has its own bad outcomes.

If I had to keep fighting an insurance company for timely reimbursement and started noticing that I pay $1500x12 every year to cover 80% of my ~$6000 medical expenses, I'd start getting all kinds of ideas. I would think about putting $1000 a month into an HSA and look for a $40000 deductible policy. I'd think about pooling my HSA with 100 of my closest friends to start a healthcare co-op where I could borrow up to 1x my balance for 1 year at low interest to pay for big medical bills and then get a $60k deductible policy instead. Then, in five or six years, when I've built up $30k, I'd drop my monthly payment into my HSA to $500, and just let the balance earn interest. Then, when the unthinkable happens, I drain the account, borrow another $30k, file a claim on everything else and spend the next year paying back the $30k and lining up contingencies for any event that will happen before I can build my HSA back up. I'd start daydreaming about pooling my $12k/year with others to build and staff a very small clinic that could take xrays and treat bee stings. Just 100 founders paying the same $1000/month could generate $1.2M per year. That could secure a 15-year $3M bond to build the clinic. It could pay a long way into supplies and staffing, too. One could even imagine that members get free services (ignoring the $1000/month) while non-members could pay $1800 for a broken finger.
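The co-op arithmetic above can be checked in a few lines; every input below is one of the commenter's hypothetical figures ($1500/month premium, ~$6000 in annual expenses covered at 80%, 100 members pooling $1000/month), not real premium or cost data:

```python
# Back-of-the-envelope check of the commenter's co-op figures.
# All inputs are the commenter's hypotheticals, not real data.

monthly_premium = 1500
annual_expenses = 6000
coverage_rate = 0.80  # insurer covers ~80% of expenses

annual_premiums = monthly_premium * 12          # what you pay in per year
insurer_pays = annual_expenses * coverage_rate  # what the insurer pays out

# Pooling instead: 100 members contributing $1000/month
members = 100
monthly_contribution = 1000
pool_per_year = members * monthly_contribution * 12

print(annual_premiums)  # 18000
print(insurer_pays)     # 4800.0
print(pool_per_year)    # 1200000
```

The gap between $18,000 paid in and roughly $4,800 paid out is what drives the commenter's daydream, and $1.2M/year is indeed the pooled figure behind the "$3M bond" idea.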

In other words, I'd figure out how to rely as little as possible on insurance companies. And the money I now spend to fund their paperwork and profit would stay in my account.

chasely 19 hours ago 0 replies      
If you had domain-level expertise that would be relevant to providing analytics for P&C (re)insurance providers/consumers, but didn't know how the insurance market works, where would be a good place to start?
jarsin 14 hours ago 0 replies      
- Cyber Insurance -

Lifetime offer: 99% discount if you switch to Linux.

kapsteur 22 hours ago 1 reply      
mooneater 21 hours ago 2 replies      
Insurance can be a catalyst for a million other things. Amazing how often the answer to "why can't we do innovation X" is: because insurance.
incan1275 18 hours ago 0 replies      
Yonatan Zunger posted a really awesome piece about the origins of insurance, that complements this nicely.


nafizh 12 hours ago 0 replies      
Is there any startup focusing on the healthcare insurance industry?
bitJericho 21 hours ago 0 replies      
The best solution is the complete outlawing of health insurance. It'll take riots for that to happen.
aaroninsf 20 hours ago 1 reply      
Some context:

Americans pay 2.5x more than the average first-world single-payer system.


This funds demonstrably and starkly worse outcomes on the metrics we should collectively care about: life expectancy. Incidence of chronic disease. Infant mortality.

The reason we pay much more for much less is simple: for-profit health care optimizes for profits, not outcomes. That is exactly what it has done; exactly what it will do.

The public sector already does single-payer in this country with an order of magnitude less overhead cost than the private sector.

When I discuss these facts with conservatives and libertarians, I usually discover that most of this is decreed "fine," because we disagree about whether or not health care is a right.

For that reason, I've stopped suggesting that it is, and focused on another stark fact:

Failure to provide basic health care to all, through straightforward means, merely means that it is provided through partially hidden means (like ER rooms) at vastly greater cost, not least because emergency care cannot perform preventive and chronic care, which in many cases would provide better outcomes for two orders of magnitude less cost.

Morality is a matter of instinct and choice. The costs of the existing broken system are however obvious, and render any defense IMO irrational.

jheriko 12 hours ago 0 replies      
> Insurance is fundamentally a data problem.

this is a special sentence. it reads roughly as "i am ignorant of reality" to me. :(

very american.

youdontknowtho 21 hours ago 1 reply      
Health insurance is just broken. The only way it gets better is if something forces them to view the entire country as one actuarial pool.
geggam 19 hours ago 0 replies      
Insurance is Socialism for Profit
blazespin 8 hours ago 0 replies      
Insurance is fundamentally a 'data' problem, if you can numerically analyze human interaction. I know insurance brokers; they definitely set up their book based on what they know about their customers.
EGreg 11 hours ago 0 replies      
There's one cool thing about insurance providers, whether private or public:

They are the ones most aligned with YOUR well being.

They pay when something happens.

They get more money if things stay good or get better.

In fact, I would argue that a public option / single-payer insurer is MORE interested, because there is no competition (induced supply, as it were) to destroy the profits if costs are driven down.

In other words, a public insurance program actively would try to improve public health, not to beat competitors but to lower its costs and reinvest that into MORE public health innovation.

Want to help the world? Make startups that improve public health (diet apps, exercise apps, whatever with measurable results) and have the insurance companies fund it.

cmurf 14 hours ago 0 replies      
There's a substantial difference between something like automobile, homeowners, or renter's insurance; and health care "insurance" which is arguably wrongly named.

Automobile insurance is mandated by law in all 50 states (Maine lets you self-insure, but you have to stipulate that you can do this; in effect it's still a mandate). Homeowner's insurance isn't required by law but might be required by the mortgage company. And renter's insurance may be required by a landlord. All of these only protect you against almost random, or at least pretty much unpredictable, events.

Health care insurance is weighted by being one part warranty and two parts aging payment plan. How many people use their health insurance every single year in some form or another? Dental? Eye doctor? Cold or flu? That's not insurance. It's a payment plan for predictable services. Even pregnancy is predictable. Something like cancer or congenital disease is not predictable. So we've conflated a bunch of things into a giant payment plan, with nearly a dozen layers of middlemen, every one of whom expects a profit cut. And so in the U.S. we have the most expensive per capita health care cost in the industrialized world, and yet not everyone is covered. Basically we are so stupid that we are willing to pay more for health care services to deny other people being covered, and to jack ourselves off proudly that this is the best health care system in the world and it's (mostly) free market. It's a big fat stupid hand job.

So I would not consider health care insurance in the same category as other insurance types. It's a sick care payment plan, and if it weren't for the government making it illegal now, you'd still see lifetime maximum caps. Get too sick? Go fuck yourself. Die, don't die, go to the ER, charge it to a credit card, declare bankruptcy, we do not care about you. We don't care. We don't give a fuck about you.

That is the American healthcare and insurance system.

pasbesoin 17 hours ago 1 reply      
Two comments from me (amongst many I might make):

1. A while ago, as a somewhat pointed quip, I started pointing out that in the U.S., for health insurance we pay "whole life" rates (yes, that term applies to life insurance, but it is an apt comparison in terms of pricing) for what is essentially "term" insurance (which, in the life insurance segment, is quite a bit less expensive, because after the term, you're uninsured).

2. I tried to do everything "right". Carried insurance when I could, when I was younger. Carried it with the best company and policy I could get. Didn't resent that I was paying in much more than I was getting out: it's "insurance", and if I'm healthy, I've already "won". I'm glad the money can go to those who need it (patients, I was assuming).

Despite doing everything "right", my last corporate job got off-shored. At a time when the individual insurance market was going from ridiculous to impossible. I managed to hold onto some form of insurance by dint of personal connections, to watch exclusions slapped onto my individual policy (for things the doctor had said he wouldn't treat -- too minor, risk/benefit not worth it -- while I was still on the corporate group policy). To watch my premium rates triple in three years.

The ACA came along just in time to rescue me from the looming vast pool of the uninsured. Which, once you entered, you had a very hard time exiting.

I never received a subsidy from the ACA. It simply allowed me to participate.

And by the way, the biggest injury to it? It was written to make insurers whole for their losses, until they had a handle on their demographics. Well, under Congress, funding is always a separate exercise -- the budget process -- from the laws and activities that that funding in turn enables.

And the Republicans simply refused to produce the funding mandated in the ACA. The numbers I heard from an expert in the field are that insurers were getting about 17 cents on the dollar for losses the ACA law told them would be reimbursed.

In other words, it wasn't simply imperfect and in need of improvement. It was a starting point, per those who negotiated and accepted some pretty terrible compromises in order to get it passed and to get some improvement in coverage started.

It was actively sabotaged.

Ok, I find I have a third point to make right now:

3. When the ACA was proposed, I believe it was not at all a "left" initiative. It was meant to be pretty middle of the road.

Around that time, 2007-8, and for some years prior, businesses were crying bloody murder at the year over year increases in health care costs they were facing. Often well into the double digits, year after year.

And they were saying, 'We can't compete with foreign competition from societies that have universal health care, where their costs are perhaps half ours and are increasing at single digit percentage rates.'

The ACA was meant to address "conservative" business problems as much as "social" individual and community problems.

All that was sabotaged, dealt with in bad faith, for the sake of the political and power agendas of the selfish.

By the way, the ACA has produced substantial improvements on the group insurance side of things. For individuals, and for businesses, by holding cost increases down.

And insurers have made good profits on these changes and their group policy business. Money they don't include in their accounting when they turn around and describe all the losses they've faced under the ACA marketplace plans (which are classified as "individual" insurance plans, not part of the traditional group policies/business).

CptJamesCook 20 hours ago 0 replies      
It's surprising to see an article about insurance without references to black swan risks.
wyager 22 hours ago 5 replies      
> Insurance carriers set their rates based on actuarial models designed to predict the likelihood of future events.

Sure, until the government says that you are no longer allowed to set rates based on actuarial data; maybe you aren't allowed to charge people appropriately if they have some expensive "pre-existing condition", or because of certain "protected" but statistically relevant characteristics. This throws a big fat spanner into the whole expectation value thing, because you're no longer an insurance company but a weird privatized subsidy pool that isn't allowed to make expectimax decisions anymore.

Your clients aren't allowed to expectimax anymore either; if they catch on to the fact that they're actually subsidizing someone else, they're not allowed to form their own rational insurance pool (because it would violate restrictions on "discrimination", e.g. ACA section 1557) and if they choose to opt out of the irrational "insurance"/subsidy market you hit them with a hefty fine.

I encourage everyone to look up expectation values for payin/payout of medical insurance for different customer types under current regulations (start with https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361028/ ). TL;DR If you're a reasonably healthy man below retirement age, you're getting screwed hard. Maybe society can collectively agree that this is a good idea, but we should at least be honest and stop referring to it as "insurance". Actual insurance markets without coercion are a net positive for all participants, rather than a convoluted scheme to (effectively) transfer money from one demographic to another.
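The expectation-value argument can be made concrete with a toy calculation. The probabilities and claim costs below are invented purely for illustration (they are not figures from the linked study); the point is only the mechanism: under a uniform premium, one group's expected net is negative and the other's positive.

```python
# Toy expected-value comparison for an insurance contract.
# All probabilities and costs are made-up illustrations.

def expected_net(premium, claim_distribution):
    """Expected annual payout minus premium for one customer.

    claim_distribution: list of (probability, claim_cost) pairs.
    """
    expected_payout = sum(p * cost for p, cost in claim_distribution)
    return expected_payout - premium

# Hypothetical healthy customer: mostly small claims.
healthy = [(0.90, 500), (0.09, 3000), (0.01, 50000)]
# Hypothetical high-risk customer, charged the same premium
# when risk-based pricing is barred.
high_risk = [(0.50, 2000), (0.40, 10000), (0.10, 80000)]

premium = 6000  # uniform premium for both groups
print(round(expected_net(premium, healthy)))    # -4780: pays in more
print(round(expected_net(premium, high_risk)))  # 7000: net subsidized
```

With these made-up numbers, the healthy customer expects to lose $4,780 a year and the high-risk customer to gain $7,000, which is the transfer the comment is describing.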

billylo 21 hours ago 0 replies      
Insurance is essential. It's like the brakes in your car: you incorporate them not because you want to go slow, but because they enable you to go fast.
pkamb 20 hours ago 0 replies      
Any options for renting vintage furniture by owner? I'd like to pick up more nice mid-century pieces when I see them cheap at estate sales, but won't have the space for them for a while...
pcunite 22 hours ago 1 reply      
Disclaimer: You may not agree with this. A downvote does not change that. A comment and intellectual conversation goes much further, however.

A quote from the article: "The insurance industry is built on mitigating downside risk."

Maybe they should have everyone agree to use computers with sensors which can create accurate maps of everything people have ever done, might do, or should do. Citizens will be so much better off with this new system of fairness. We'll call it insurance instead of government.

My own private basic income opendemocracy.net
404 points by deegles  1 day ago   261 comments top 30
maerF0x0 1 day ago 13 replies      
> I lucked into money.

The basic point is that he gets paid more while not doing more "work". For those for whom "work" is the only lever they can use to convert effort into money, inequitable work-to-cash ratios look "unfair" (a morality statement).

Market economies do not function on work, but on value. He did the same work in a high value scenario. A glass of water provided in the middle of a developed city is worthless and thus free. A glass of water provided at the right time in a desert is invaluable and thus expensive.

My takeaway is this: always meet the highest-value need you can, and also create additional value by helping others meet higher-value needs than they currently do. Lower-placed people may not be able to "work" their way to the upper echelons, but they can invent, intuit, or otherwise create high-value leverage from their own and others' work. I don't see it as unfair that a person making $1 a day is unlikely to become a deca-billionaire in their lifetime (total mobility); instead I want to ensure they have some mobility so that they can always better their situation, maybe by an arbitrarily selected 2x multiplier.

erentz 1 day ago 5 replies      
Wow, I am astounded that all the comments so far are trying to tear down this person and what he is saying, and very transparently because there seems to be something threatening about the idea that he got where he is through luck. Why do people find this idea so threatening?

I got where I am today due to luck. I did work hard. But I know other people who work hard too. Who are just as capable. They just didn't get assigned to the class with the really awesome, motivating teacher. Or randomly picked out of the pile by a recruiter for a call. Or happen to wind up working somewhere where they made friends with a guy who was well connected. Etc.

To discount luck's role in how people get successful is a huge blind spot.

rrggrr 1 day ago 7 replies      
Gosh. OP made me sad with his thesis.

"1. For owners, work is optional."

... As an owner I wish this were true. For me and most owners I know, work is mandatory, and it's 60-hour weeks and a lot of sleepless nights. There is no safety net. There is no unemployment. I get no workman's compensation. I have no employment law protections. It's, as one HN comment said some time ago, "a suffering contest" a lot of the time.

"2. ...your stuff will keep on making money forever."

... Again, I wish this were true. Lifetime income-producing investments are hard to find where volatility, fees, inflation, taxes and life's circumstances don't erode value. Sure, the truly wealthy are set. But most business owners are not truly wealthy, and the exit strategies just aren't there in many industries.

"3. We can get entrepreneurship without the enormous rewards to ownership we have today"

... Anyone who's dealt with the day-to-day grind of owning a business would be especially offended by this comment. A firehose of pain, suffering and risk flows to the owner in the form of litigation, regulators, employees, vendors, customers, bank officers, etc., etc. I've equated it to a "lazy susan" dispensing aggravation in every conceivable form. It's the reward - if you can get it - that makes it worthwhile. Try just getting divorced as an owner and you become an instant convert to rewarding ownership.

Yes, rent-seeking monopolies are bad. Yes, more needs to be done to create opportunities for wealth creation by employees - to give them the freedom to say "no". No, punishing ownership isn't the answer.

UPDATE x2: I'm really not confusing types of owners. The author's thesis is broad, not only in the article but in his work in general. Access to passive-income investments (e.g. REITs) is among the few places small business owners can go to rent-seek. Rent-seeking is bad under a few sets of conditions, but it doesn't deserve the author's indictment.

jacknews 1 day ago 0 replies      
Owning land is certainly a private basic income.

Consider that land prices (and therefore rents) basically reflect the surrounding economy. By buying land you are essentially profiting from the work of everyone else in that economy, forever, and only taxed minuscule property taxes.

IMHO https://en.wikipedia.org/wiki/Henry_George had the correct solution: tax land (though not the structures developed on it) at its full market rental value. And the concept should apply to ownership of all resources.

matt_wulfeck 1 day ago 8 replies      
> My first big lucky break happened in 2009 when Georgetown University hired me as a philosophy professor on their campus in Qatar

Apparently going to high school, studying and passing the SAT, applying and being accepted into university, studying, passing, and graduating university is just luck.

Have we reached peak luck yet? Is there nothing we can ever do to improve our circumstances and our lives?

This is like a new age of economic predestination, where everything has been predecided for you by God and there's nothing you can do. Calvinist and BI proponents apparently have a lot in common.

OJFord 1 day ago 7 replies      
> [in the US] people who dont need their income are taxed less than people who do

I'm not American; I don't know the tax structure. But I can't even imagine that this could possibly be true, or what is true that is meant.

Without looking it up, I would assume that the USA subscribes to progressive taxation, under which the richer pay more tax in both absolute and relative terms.

Even if it has a flat rate of tax (a structure, incidentally, that makes far more intuitive sense to me), the richer are still paying more tax in absolute terms, and equal in relative terms.

What Earthly metric is the author using?

bigtunacan 1 day ago 1 reply      
Can someone clarify this piece for me? I don't understand how the author is able to completely avoid taxation on the rents.

> My wife and I don't need the money we make from owning most of the business. We live off the salaries of our jobs, and reinvest virtually our entire share of the business. These reinvestments count as losses, and so officially we have never made any income or paid income taxes on our share of the business.

greedo 1 day ago 2 replies      
Ignoring the logical fallacies the author makes, one thing jumped out at me. He's not buying the average home in South Bend. If he's only paying $180/year in property tax, the assessed value of his average home is around $9K. A quick look at Google shows the median price (from Zillow, so a grain of salt is recommended) of $65K. If he and his brother are buying up $9K homes, his average mortgage is under $50/month. I'm sure that he's doing the socially responsible job of only charging his tenants just a little over his full costs...
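The back-of-envelope here can be reproduced, with two assumptions that are mine rather than the comment's: a ~2% effective property-tax rate and a 5%, 30-year mortgage on the full assessed value.

```python
# Rough sanity check of the $180/year tax -> ~$9K home -> <$50/month
# mortgage chain. The 2% tax rate and 5%/30yr terms are assumptions
# for illustration, not figures from the comment.

annual_tax = 180
tax_rate = 0.02
assessed_value = annual_tax / tax_rate  # 9000.0, matching the comment

def monthly_payment(principal, annual_rate, years):
    """Standard amortized mortgage payment formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(assessed_value, 0.05, 30)
print(assessed_value)
print(round(payment, 2))  # ~48, i.e. indeed under $50/month
```

Under those assumed terms the payment lands around $48/month, consistent with the "under $50/month" claim.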
Mz 1 day ago 1 reply      

Guy in a financially comfortable position feels guilty, decides his education, willingness to live in Qatar, etc. shouldn't really be worth this much, and completely discounts the idea that anything actually matters; it is all merely "luck" and that's it.

Sounds like an existential crisis, not really a good commentary on the concept of basic income.

stuaxo 1 day ago 1 reply      
Seems to state the obvious to me, but looking at the comments on the article, I'm in a minority.
dlwdlw 14 hours ago 0 replies      
I'm having a lot of fun lately thinking about everything in cryptotokens:

Everyone gets 3 basicMealTokens and 1 basicFun token.

However, in SF, there are only sfMealTokens which require 20 basicMealTokens.

In Vietnam, 1 vietnameseChickenPlatterToken is only half a basicMealToken.

Tech workers are paid in techWorkTokens which can be exchanged for 200 basicMealTokens each or 100 basicFunTokens, or 1/1000th of a Tesla token.

Token transfers have friction so it is advantageous to maintain a surplus of tokens most easily transferrable (least hops) to what you want.

To ease this, SF maintains a sfBasicToken and makes the market to transfer tokens from all over the world to sfBasicTokens. sfBasicTokens have an advantage in that you can pay for things easily instead of paying 67.34 redditStatusTokens.

Some old geezer is willing to sell his house for a legacy usdToken, a token so old it had a physical representation and no historical memory.

The basicFoodToken idea doesn't pan out. SF starts giving out sfBasicFoodTokens instead. However, large numbers of people exploit an arbitrage mistake and live like kings in Thailand, taking advantage of the buying power of sfTokens, since Thai people want to make it big with their startup dreams in the Bay Area.
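The token economy sketched above is essentially a currency-conversion graph, where the "friction" favors paths with the fewest hops. A minimal sketch, using only the commenter's made-up rates and breadth-first search for least-hop routing (the graph structure and routing are my illustration, not part of the comment):

```python
# Token-exchange graph using the commenter's made-up rates.
# rates[src][dst] = how many dst tokens one src token is worth.
from collections import deque

rates = {
    "sfMeal": {"basicMeal": 20},
    "basicMeal": {"vietnameseChickenPlatter": 2},  # 1 platter = 0.5 basicMeal
    "techWork": {"basicMeal": 200, "basicFun": 100},
}

def convert(amount, src, dst):
    """Find the least-hop conversion path (BFS) and apply its rates."""
    queue = deque([(src, amount, [src])])
    seen = {src}
    while queue:
        token, value, path = queue.popleft()
        if token == dst:
            return value, path
        for nxt, rate in rates.get(token, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, value * rate, path + [nxt]))
    return None, None  # no conversion path exists

value, path = convert(1, "techWork", "vietnameseChickenPlatter")
print(value, path)  # 400 ['techWork', 'basicMeal', 'vietnameseChickenPlatter']
```

So one techWorkToken buys 400 hypothetical chicken platters via two hops, which is the arbitrage gap the comment's Thailand scenario exploits.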


objectivistbrit 1 day ago 1 reply      
He talks about the unfairness of "our economic system" but he makes his income from Qatar's economic system. Qatar likes to spend their oil money on western professors to give themselves an air of legitimacy.

As for renting out property, it's never effortless or risk-free. If it was, the rate of return would drop to that of other minimum risk assets (e.g. treasury bonds).

brianwawok 1 day ago 1 reply      
Hey, South Bend made an article on the internet and it wasn't even about our mayor!

Rental income here is a bit weird because you have in general a very poor town, with a very expensive private university with old money funding houses for students. I bought my house here for a fair bit less than it would cost to rent something half the size.

tiku 1 day ago 1 reply      
So this is the part 2 to the "how to retire at 34" article from a few years back..
edpichler 1 day ago 1 reply      
The article is about something that should be obvious about capitalism: money goes to the capital owner.

Of course this has a lot of problems, but it's the best we have today. In this system, the rules of the game are: we need to work and spend less than we earn, and use the money to accumulate more capital, not to buy more things.

DailyDreaming 1 day ago 0 replies      
I agree with the author's points. There have been, almost certainly, Einsteins who have lived and died in poverty without ever having tasted a book. I feel like a basic income for everyone is supportable and would give people falling to an unknowable fate the lift they need to go beyond surviving. Less work seems to need to be done with tech advances today, but the only thing a job-replacing robot seems to do is take the incomes of two people and give them to the person who owns the robot. The slippery slope is that we will eventually do this for most jobs, if we survive long enough as a species, and I agree we need change to evolve as our society does.
dugluak 1 day ago 0 replies      
It's not two big lucky breaks; it should actually be three big lucky breaks. Being born as a human is the first lucky break every human being on earth gets. If you think about it, 'merit' is man-made. Nature, most of the time, works on pure chance.
imgabe 1 day ago 0 replies      
> No one is going to give tens of thousands of dollars every year to some guy who owned a couple of houses and said he knew how to manage more,

Well, not no one. There are hard money lenders who will loan money to "some guy" to do more or less that.

arwhatever 15 hours ago 0 replies      
Well, success is either based entirely on luck or entirely on merit, and I'm sure we'll figure out which of the two it is real soon now.
hartator 1 day ago 2 replies      
I think it's ironic to criticize capitalism using Qatar, where basically no freedom exists and everything is controlled by the state. Not even mentioning the actual slavery.
tunesmith 1 day ago 2 replies      
What I got out of this was the suggestion to fund Basic Income by taxing unearned income. I'd like to read more on how the numbers would actually work out. If that were the only funding source of BI, then how much impact would that be on the rich, and how much benefit would that be for the people that could use BI?
ableton 1 day ago 1 reply      
If you live in America you can do what you want. You can study hard and get a scholarship to college even if you are really poor as a kid. I know a poor Mexican immigrant whose kids all got college degrees for free. It doesn't really matter if you grow up poor (in America). There are many people who are very rich precisely because being poor as kids drove them to work hard.

What does matter is if you grew up in a stable family. If you don't have that you get really whacked out psychologically, and it can affect people for their whole lives.

People look to the government to tax and spend to solve social problems. However, oftentimes the government is the cause of such problems in the first place. For example, we see a huge number of people in prison today. Studies show that 60% of people in prison grew up without a father in the home. The government today actively promotes divorce and children out of wedlock by paying single mothers. This "compassion" has raised a generation of young people who are lost.

America has more money than ever, but our families are falling apart, torn up by divorce and pornography, and our young people are paying the price. The government should focus on supporting strong families, and you would watch so many major social problems evaporate. I know that I am where I am today because of my dad's positive influence on me. Without his support I would be nowhere right now. Every kid needs a loving family. It's time the government started trying to promote that.

Of course in Qatar they have serious human rights issues that need to be resolved. I'm just talking about America.

pavement 1 day ago 1 reply      

 nor is there a combination of bad choices that could conceivably put me in their position from my starting point.
Guess again.

dredmorbius 1 day ago 0 replies      
Read this essay (which is, as it goes, pretty good), then go back and take a look at, say, the first book of Adam Smith's Wealth of Nations. In particular the prices of commodities, labour, stock, and rent, as well as the factors influencing wages of labour (chapter 10).


It doesn't hurt to consider Ricardo as well.

From Ricardo, we get the Iron Law of Wages, and the Law of Rent. Key to understand is that these move in opposite directions:

* Wages tend to the minimum subsistence level, all else equal.

* Rent rises to claim the surplus value afforded.

That is, wages are based on the input costs, whilst rent is based on the output value (use value). The third element, price (sometimes "exchange value"), is what's at question.

(I distinguish cost, value, and price as three distinct concepts. This is a long-standing question, and in my view, a grossly confused one, in economics. They're related, but not deterministically. In the long term, C <= P < V. Bentham's "utility" is an exceptionally red herring. More, very much in development: https://redd.it/48rd02)

Note that this means that rent is determined by the pricing behaviour. If you're a "business owner" but you're not capable of extracting rents, you're selling commodities or labour; you're not collecting rent. The term here is in the sense of economic rent. Simply "owning a business", without the economic circumstances which give rentier power, isn't sufficient -- don't confuse what it is you're doing with the systemic construct in which you're doing it. Widerquist emphasises this point specifically; several commenters here clearly haven't grasped it.

Another elided discussion: rents are associated with access control, and can be thought of as authority over some (virtual or physical) gate. They're a natural element of any networked structure, in which nodes and links exist, most especially where some nodes have higher value, or control more flows. I believe though can't yet show that all cases of rent involve a fundamentally networked structure, again, virtual or real. My concern is that this may be a reflexive definition, I'm trying to determine that it is or isn't.

Another element of this is compensation for labour. Smith lays out five elements determining this, and I find them durable and comprehensive. In Widerquist's case, the combination of requisite skills and the comparative unattractiveness of teaching in Qatar allow him to claim both a high wage and favourable working conditions (including a lighter-than-typical workload). This falls straight out of Smith. That is, he earns his salary "by doing a job few others are both willing and able to do".

If you're looking at the macro view, realise that these don't scale. That is, the innate and acquired capabilities to teach at University level are not widely distributed through the workforce, and the lack of appeal of various sorts of work is sufficient to dissuade (or prevent, or disqualify) others from taking part in it.

There are other elements here as well: the complementarity of time and skills, on the one hand, with money, on the other (the classic business partnership). Tax structures (and who they benefit). The relationship of wealth and political power. Smith again: "Wealth, as Mr Hobbes says, is power." One of the shortest and clearest sentences in WoN, incidentally.

Spooky23 1 day ago 0 replies      
This guy sounds like a real shithead.

He makes his living teaching the children of totalitarian aristocrats at a faux university, is saving for the future on the backs of the poor back at home, and gets a kick out of having indentured servants at his beck and call.

I wouldn't shed a tear when his little empire of dirt collapses.

obstinate 1 day ago 3 replies      
On some level, this is basically "capitalism 101." If I don't need my resources now, I can put them to work so that I can later benefit from them. This leads to me having productive assets, which pay me some return.

If I could not acquire productive assets, there would be much less reason to save. And it's unclear how one would save, as banks would likely not exist either. You can solve this problem somewhat by having a centrally planned economy. But then you have the problems that hit those.

samnwa 1 day ago 0 replies      
There can only be so many people at the top.
the_cat_kittles 1 day ago 1 reply      
for the articles description: "how [the economy] rewards people who own stuff rather than people who do stuff"

i can't believe i never summarized my own feelings that succinctly. of course the reality is it does both, but i think it's way out of balance towards ownership rather than doing.

moeymo 1 day ago 2 replies      
Property tax is $15 per month in South Bend for a house? Wrong. Capital gains is taxed lower than income? Wrong -- sometimes it is, sometimes the other way around. Multiple factual errors.
moeymo 1 day ago 1 reply      
Property tax is $15 per month for a house in South Bend, IN? lol. Capital gains is taxed lower than income? Wrong ("it depends"). I stopped reading there.
The .feedback scam everythingsysadmin.com
471 points by 0x0  12 hours ago   131 comments top 43
wodenokoto 11 hours ago 8 replies      
Wow, the author is not exaggerating when saying it is a scam.

Looking at the .feedback page for Stack Overflow, it says at the top, in fairly large letters "We make Stack Overflow, where the world's developers get answers, share knowledge & find jobs they love. Also proud builders of the @stackexchange Q&A network."

Then at the absolute bottom, in small, washed out print it says "Disclaimer: This site is provided to facilitate free speech regarding Stack Overflow. No direct endorsement or association should be conferred."

So, users are not supposed to "confer" that a page claiming to be by the makers of Stack Overflow is associated with SO? Beyond any reasonable doubt, the people behind that site are trying to scam visitors.

If the creators of the .feedback pages are also the TLD owners, it seems obvious to me that they should face legal charges and be stripped of the TLD.

finnn 8 hours ago 1 reply      
The whois data for domains under this TLD is kind of interesting

whois stackoverflow.feedback returns a phone number with a CNAM of STACK OVERFLOW, and the same address listed on https://stackoverflow.com/company/contact

whois google.feedback returns the phone number +1.1978600872, which is not a valid US area code last I checked. The address appears to be a PO box in Seattle.

facebook.feedback has the same bogus phone number and the same PO box as google.feedback.

myaccount.feedback (where they send you if you try to vote on anything, presumably other things too) has a residential address on Mercer Island, WA and a Google Voice number listed.

Calling the Google Voice number results in a voicemail where a person identifies themselves as "Jay" (presumably Jay Westerdal[0] of Top Level Spectrum, Inc who owns the .feedback TLD[1])

Another thing of interest is that myaccount.feedback encourages you to login with Google/Facebook/Twitter/LinkedIn

[0]: https://icannwiki.org/Jay_Westerdal
[1]: https://icannwiki.org/Top_Level_Spectrum,_Inc.

emidln 4 hours ago 2 replies      
I worked for a feedback company in the past. Nobody cares, even when they pay for a feedback widget and get the reports emailed/pushed to their managers. Most of the clients didn't even log in to the site or collect the feedback (sign up for notifications, download a CSV, etc). AFAICT, companies paid for the feedback widget to check some box on a yearly PowerPoint along the lines of "manages user feedback". It's a good racket to be in once you realize that nobody cares about your product and everyone just wants a good-looking widget that is effectively write-once. You just build a good sales and relationship-management team, then hit up the enterprise market.

I'm 90% sure that the entire concept of the feedback form/widget is only still with us as a way of channeling user rage into non-social platforms. "Here, complain into this void that won't hurt our public opinion!"

xg15 8 hours ago 0 replies      
Comment from the article:

> I would like to add that IMHO all new TLDs are scam. Brand owners are forced to register their name in many of them.

.feedback is just the tip of the iceberg.

I think this is the core issue here. I remember that a few of ICANN's new TLDs caused similar issues. The idea of selling TLDs to companies for use at their discretion is horrible enough, but they also seem to be completely ignorant of this problem.

This feels as if ICANN is trying to become the new FIFA.

(I don't really have much sympathy for Google etc. having to pay $600 a year, but the same problem could also hit lots of small sites. Also, with the current trend, TLDs seem to lose all structure and meaning and just turn into another brand vehicle or trade asset.)

phantom_oracle 9 hours ago 1 reply      
On the topic of scams, has anyone ever discovered what ICANN did with the enormous amount of money it raised by selling these TLDs at a premium?

I recall .blog being purchased by Automattic for 15-20M (that being just 1 example).

ICANN is to tech what the SEC is to finance: a corporate revolving door where you join to do your corporate masters' bidding and then move back to your 7-figure job.

jommi1 8 hours ago 0 replies      
Looks like they were already found out in March...? (0) How are they still up??

The company behind this (1) also has .realty, .forum, .contact, .pid and .observer, and all the "sell" pages look alike. Holy fuck this looks dirty.

(0) https://www.icann.org/uploads/compliance_notice/attachment/9...
(1) http://www.topspectrum.com

toni 6 hours ago 0 replies      
The .feedback sites also load a "fingerprint" script which tries to gather all kinds of info from your browser.


SimonPStevens 9 hours ago 1 reply      
The real scammers here are ICANN and the atrocious way they handled the generic TLDs.
accountyaccount 2 hours ago 3 replies      
They also own .sucks... for which they were charging $2500 per domain. I don't know why these manipulative TLDs are allowed without considerable regulation. This latest example seems closer to direct extortion, and is likely illegal in many places.
BLanen 49 minutes ago 0 replies      
Couldn't companies file a class action over the fact that the registry pre-registered domains containing trademarks belonging to other companies but didn't give those domains to those companies?

There's precedent for getting a domain based on trademark from someone else.

speedplane 7 hours ago 0 replies      
GTLDs were introduced only a few years ago, but it's clear the implementers were conflicted, trying to reconcile the freedom of the "earlier internet" with what the internet has become today. They wanted to honor the freedom of the DNS system, but they also recognized the internet has changed enormously. Today, the internet is largely ruled by large brands, just as most mainstream media is.

The GTLD system is a pretty poor compromise between those two positions. Users don't have the freedom of the early DNS system, and brands now have a huge enforcement burden to manage. Sites like coca-cola.fun and coca-cola.shopping are almost certainly not allowed, and will be shut down eventually, but it now costs Coke a sizable sum to monitor and take down the infringing websites.

_jal 10 hours ago 1 reply      
Ick. In a way, it is a sort of corporate version of those mugshot sites.

I understand a lot of other scams. (Not condone, but I see the attraction of running it.) This one just seems like a lot of work to put into something that I really don't see the targeted companies choosing to go along with. Seems like borrowing a ton of grief - at the very least, they'll be hearing from a bunch of crabby lawyers.

captainmuon 10 hours ago 0 replies      
I wonder how they can get away with this, and not be sued into oblivion.

On an amusing note, the pages barely load for me. Could this be the first time the HN hug of death took down a whole TLD?

gumoro 7 hours ago 1 reply      
Go to http://feedback.feedback/ and write a sternly-worded review, that'll teach'em.
Marat_Dukhan 9 hours ago 3 replies      
Interestingly, amazon.feedback redirects to amazon.com, so Amazon did pay
chadcmulligan 8 hours ago 1 reply      
just lodged a complaint on icann.feedback, should fix it.
dasil003 9 hours ago 2 replies      
Reminds me of GetSatisfaction back in the day. It was never as much of an outright scam as this, but they definitely had that protection racket vibe about it.
metaphor 55 minutes ago 0 replies      
> If they do discover it, they are given a choice: Pay $20/month to receive the feedback, or pay $600/year to take the web site down.

Corporate-driven lawsuit for trademark violation?

SippinLean 1 hour ago 0 replies      
If you click through "Claim this site" there doesn't seem to be anything in the form of verification, seems that just anyone can claim them. The price for SO was $750 though, not $600.
r1ch 3 hours ago 0 replies      
Aren't all ICANN accredited registrars bound by the UDRP[1]? It seems like any domain registered "on behalf" of a company could easily be taken down / claimed by the UDRP process.

[1] https://www.icann.org/resources/pages/help/dndr/udrp-en

ollybee 5 hours ago 0 replies      
This seems a similar model to the .sucks TLD. They offered domains to company owners for $2500 before offering them to the public.
UnrealIncident 8 hours ago 0 replies      
They're also fingerprinting every user. I noticed because it caused Firefox to hang for over a minute.
crispytx 1 hour ago 0 replies      
Scammers like this give capitalism a bad name.
kierenj 2 hours ago 0 replies      
I hit the "Claim" button, and it's gone to a page asking for my CC details. I wonder if anyone could claim it..
babuskov 6 hours ago 0 replies      

Looks like they got some great feedback.

rrauenza 9 hours ago 3 replies      
Could the major browser providers start just blacklisting these kinds of TLD's? Or grey listing with huge warnings?
sergiotapia 9 hours ago 0 replies      
Fucking genie's out of the bottle now isn't it. :)

A bit late to call party foul when absurd tlds are available more and more. Ocean of piss and all that. Embrace the chaos.

publicopinionsa 9 hours ago 1 reply      
I'd like to add that IMHO all new TLDs are a scam. Brand owners are compelled to register their name in most of them.
martin-adams 5 hours ago 0 replies      
This feels very similar to Trustpilot in my opinion. The format and description of the company (in the first person) is extremely similar. The only difference is branding. Trustpilot don't have a domain looking like the company, but companies can pay to take control of their brand on the platform.
jheriko 4 hours ago 0 replies      
this is the price of having no or nearly no regulation or standards.

as much as i appreciate the arguments for why that is the case, its important to recognise the cost of that philosophy in practice

still i hope there is some legal action that comes against them.

Beltiras 1 hour ago 1 reply      
www.amazon.feedback redirects to amazon.com login page. They paid the extortion?
TeMPOraL 5 hours ago 0 replies      
See the thread here[0] for info on who's likely behind it. It seems to be operating since at least 2015 already.

[0] - https://news.ycombinator.com/item?id=14669058

warent 9 hours ago 0 replies      
An unethical and shamefully pathetic extortion scheme. This is really a disappointing and completely uninnovative, destructive direction to take the internet in
snakeanus 5 hours ago 1 reply      
This is why people should move to things like OpenNIC or to a DNS-less address scheme like the ones used in namecoin/tor/i2p. The ICANN scammers should be stopped.
ara24 4 hours ago 0 replies      
Browsers should just s/.feedback/.com/, problem solved.
King-Aaron 6 hours ago 0 replies      
There's some stellar reviews popping up there already
blazespin 8 hours ago 2 replies      
You can't do "or pay $600/year to take the web site down". There are laws against that; it's called extortion. That being said, they could be more subtle, like Glassdoor / Yelp / etc. People would actually have to find .feedback domains useful for that to happen, however. My sense is that this will just go nowhere, no business will be made, and it's all a lot of sound and fury over nothing.
natch 8 hours ago 0 replies      
Can the TLD just be blackholed by major DNS providers?
yellowapple 8 hours ago 0 replies      
I'm normally thrilled to hear about new TLDs.

This is an exception. It'd be great in theory, but the preregistered scam domains are absolutely ridiculous.

I hope the likes of Google and Facebook come down on these sites, and come down on them hard.

babuskov 6 hours ago 0 replies      
I wonder, how does this compare to Yelp?
oh_sigh 9 hours ago 2 replies      
How is this a scam? The result is obviously just a ratings site which never purports to be the sites referenced in their URL...why does it matter if you type in google.feedback or feedback.com/google or google.feedback.com into your browser? No reasonable person could be misled to believe that google.feedback/ was at all related to Google.

edit: Okay, I guess it is a pretty messed up website. If you go to reply to something, it gives you the ability to "officially" reply (as, say Google) for a mere $29 per message. This doesn't seem like extortion, but it is a pretty horrible business idea.

lemagedurage 7 hours ago 2 replies      
A site that provides uncensored free speech aimed at companies is considered a huge scam? It looks like the comments are not manipulated, and I appreciate that authenticity more than practices happening around Google, YouTube, Facebook etc.
Flawed reporting about WhatsApp theguardian.com
351 points by Calvin02  1 day ago   119 comments top 16
te_chris 1 day ago 5 replies      
For those who don't know, newspapers often have a Reader's editor whose job it is is to criticise and be the voice of the readers inside the newspaper. In this case, this is written by that person for the Guardian after what would appear to be a thorough investigation of the matter.

There's a lot of people here saying this should have happened faster, they're likely right, but also, given how extensive and thorough this is, it is more likely an example of how old-school editorial rigour just takes a lot of time.

gtf21 1 day ago 5 replies      
I think this is a really thorough mea culpa which is quite impressive, given the frequent failure of other newspapers to publish a prominent apology when they have got things far more wrong than this.
idlewords 1 day ago 0 replies      
It astonishes me that the Readers' Editor, someone with long experience in journalism, thinks retracting this story would mean taking it offline as if it never happened.

Frankly, I think this is a weak response. There is nothing in this investigation they could not have cleared up in January; instead, they dawdled and now they equivocate.

acchow 1 day ago 2 replies      
Pretty much every single person I know outside of the Bay Area and not working in tech believes that the government and the corresponding corporations running the service are reading all of their messages on:

* WhatsApp
* FB Messenger
* iMessage
* Hangouts

They also all believe that the police can look at their Facebook posts because they have special access.

This is precisely why there was minimal reaction to the Snowden revelations - what revelations?

ngrilly 1 day ago 1 reply      
The linked Guardian's article doesn't really explain why they were wrong. This article by Moxie, designer of the Signal Protocol, is great: https://whispersystems.org/blog/there-is-no-whatsapp-backdoo....
omnifischer 5 hours ago 0 replies      
Sad that the writer of the flawed article (who calls herself an investigative journalist - https://twitter.com/manisha_bot) doesn't even mention the amended article on her Twitter account.
chicob 1 day ago 0 replies      
Speaking of security, the new possibility of a Google Drive backup for WhatsApp messages and files has been quite overlooked imo.

This backup is not e2ee, which means that if the other party is backing up data in Google Drive, then at least part of your WhatsApp history is not e2ee. Yes, it might be encrypted within Google Drive by whatever secure methods Google chooses, but not by you.

jancsika 1 day ago 0 replies      
From the open letter:

"People believe that you perform due diligence on matters critical to their lives and safety."

And at the bottom of the open letter many security experts have signed in support. That is, "signed" in the colloquial sense.

Small digression-- let's say a person tasked with reviewing a story in the Guardian is not an expert in security. They would really love some way to start with one or two security experts they know and trust and "fan out" to other experts based on their relationships.

Is there a quick and easy way for the journalist to do that by looking at the names of cryptographers listed at the bottom of a webpage?

Also: can someone explain what "due diligence" means? Is it the expectation here that a journalist not only report what would look reasonable to a non-journalist reader, but also use their considerable skill to ensure that they present their readers with verifiable facts, to the best of their ability? Even if it takes a considerable amount of time and effort on their part? Even if verifying the evidence relies on clunky, cumbersome tools that no one wants to spend time using?

EternalData 1 day ago 0 replies      
Good on them for admitting to all these flaws. I'm especially interested in the fact that government officials seem to be citing articles to push people to certain communication channels.
vzaliva 1 day ago 0 replies      
It's funny that, while reading this article establishing the Guardian's screw-up, I was nevertheless asked twice to give them money.
lern_too_spel 1 day ago 1 reply      
Maybe one day they'll issue a correction for their PRISM reporting too. The solution is exactly the same as the solution in this case: the editors should demand that the journalists verify their claims with experts.
danjoc 1 day ago 4 replies      
This is not the original Title. Submitter is editorializing via title. Please don't do that on Hacker News.
majewsky 1 day ago 2 replies      
Hint: If site is empty for you, remove "amp." from domain name.
eehee 1 day ago 3 replies      
This entire conflict just seems completely absurd to me - why on earth are the 72 "experts" who signed the open letter so quick to trust WhatsApp without access to the source code?
Delivering Billions of Messages Exactly Once segment.com
436 points by fouadmatin  22 hours ago   128 comments top 35
newobj 21 hours ago 4 replies      
I don't want to ever see the phrase "Exactly Once" without several asterisks behind it. It might be exactly once from an "overall" point of view, but the client effectively needs infinitely durable infinite memory to perform the "distributed transaction" of acting on the message and responding to the server.


- Server delivers message M

- Client process event E entailed by message M

- Client tries to ack (A) message on server, but "packet loss"

- To make matters worse, let's say the client also immediately dies after this

How do you handle this situation? The client must transactionally/simultaneously commit both E and A/intent-to-A. Since the server never received an acknowledgment of M, it will either redeliver the message, in which case some record of E must be kept to deduplicate on, or it will wait for the client to resend A, or some mixture of both. Note: if you say "just make E idempotent", then you don't need exactly-once delivery in the first place...

I suppose you could go back to some kind of lock-step processing of messages to avoid needing to record all (E,A) that are in flight, but that would obviously kill throughput of the message queue.

Exactly Once can only ever be At Least Once with some out-of-the-box idempotency that may not be as cheap as the natural idempotency of your system.

EDIT: Recommended reading: "Life Beyond Distributed Transactions", Pat Helland - http://queue.acm.org/detail.cfm?id=3025012
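
To make the trap concrete, here's a minimal Python sketch of committing E and the intent-to-ack together (hypothetical names; it only dodges the problem because E here is itself a write to the same local store -- an external side effect would reintroduce it exactly as described above):

```python
import sqlite3


class ExactlyOnceClient:
    """Sketch: the effect E and the dedup/ack record commit in one
    local transaction, so a redelivery after a crash is detected."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE processed (msg_id TEXT PRIMARY KEY, acked INTEGER)")
        self.db.execute("CREATE TABLE effects (msg_id TEXT, payload TEXT)")

    def on_deliver(self, msg_id, payload):
        # One transaction covering both the effect row (E) and the
        # intent-to-ack record. If we crash before acking, redelivery
        # hits the PRIMARY KEY constraint and is skipped.
        try:
            with self.db:  # commits on success, rolls back on exception
                self.db.execute("INSERT INTO processed VALUES (?, 0)", (msg_id,))
                self.db.execute("INSERT INTO effects VALUES (?, ?)",
                                (msg_id, payload))
        except sqlite3.IntegrityError:
            return "duplicate"  # E already happened; just (re)send the ack A
        return "processed"

    def on_ack_confirmed(self, msg_id):
        with self.db:
            self.db.execute(
                "UPDATE processed SET acked = 1 WHERE msg_id = ?", (msg_id,))
```

The moment E touches anything outside that one store, you're back to at-least-once plus dedup.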

mamon 22 hours ago 5 replies      
"Exactly once" model of message is theoretically impossible to do in distributed environment with nonzero possibility of failure. If you haven't received acknowledgement from the other side of communication in the specified amount of time you can only do one of two things:

1) do nothing, risking message loss

2) retransmit, risking duplication

But of course that's only from messaging system point of view. Deduplication at receiver end can help reduce problem, but itself can fail (there is no foolproof way of implementing that pseudocode's "has_seen(message.id)" method)
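
For what it's worth, a best-effort has_seen with bounded memory might look like this sketch (illustrative names only; the size and TTL bounds are exactly where it stops being foolproof):

```python
import time
from collections import OrderedDict


class SeenWindow:
    """Best-effort duplicate detector. Memory is bounded, so old IDs
    are forgotten and a late duplicate can slip through."""

    def __init__(self, max_entries=1_000_000, ttl_seconds=3600):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self.seen = OrderedDict()  # msg_id -> first-seen timestamp

    def has_seen(self, msg_id, now=None):
        now = time.monotonic() if now is None else now
        # Expire entries older than the TTL, oldest first.
        while self.seen:
            oldest_id, ts = next(iter(self.seen.items()))
            if now - ts <= self.ttl:
                break
            self.seen.popitem(last=False)
        if msg_id in self.seen:
            return True
        self.seen[msg_id] = now
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)  # evict oldest; its dupes now pass
        return False
```

Every knob here trades memory for a wider dedup window; none of them makes the guarantee absolute.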

alexandercrohde 12 hours ago 1 reply      
Here's a radical solution. Instead of becoming a scala pro akka stream 200k engineer with a cluster of kafka nodes that costs your company over $100,000 of engineering time, technical debt, opportunity cost, and server costs, just put it all in bigtable, with deduping by id....

Enough of resume-driven engineering. Why does everyone need to reinvent the wheel?

bmsatierf 21 hours ago 0 replies      
In terms of connectivity, we deal with a similar problem here at CloudWalk to process payment transactions from POS terminals, where most of them rely on GPRS connections.

Our network issues are nearly 6 times higher (~3.5%) due to GPRS, and we solved the duplication problem with an approach involving both client and server side.

Clients would always ensure that all the information sent by the server was successfully received. If something goes wrong, instead of retrying (sending the payment again), the client sends just the transaction UUID to the server, and the server might either respond with: A. the corresponding response for the transaction or B. not found.

In the scenario A, the POS terminal managed to properly send all the information to the server but failed to receive the response.

In the scenario B, the POS terminal didn't even manage to properly send the information to the server, so the POS can safely retry.
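
The server side of that UUID-replay protocol can be sketched roughly like this (hypothetical names, not CloudWalk's actual implementation):

```python
class PaymentServer:
    """Sketch: responses are stored keyed by transaction UUID, so a
    retry never charges twice and a status lookup resolves A vs B."""

    def __init__(self):
        self.responses = {}  # txn_uuid -> response of a completed transaction

    def process(self, txn_uuid, amount):
        # Idempotent: a retry with the same UUID returns the stored
        # response instead of processing the payment again.
        if txn_uuid in self.responses:
            return self.responses[txn_uuid]
        response = {"uuid": txn_uuid, "status": "approved", "amount": amount}
        self.responses[txn_uuid] = response
        return response

    def lookup(self, txn_uuid):
        # Scenario A: transaction known -> resend its response.
        # Scenario B: never received -> client can safely retry.
        return self.responses.get(txn_uuid, "not found")
```

The key design point is that the client retries with the UUID first and only resends the full payment on "not found".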

falcolas 21 hours ago 1 reply      
So, a combination of a best effort "at least once" messaging with deduplication near the receiving edge. Fairly standard, honestly.

There is still a potential for problems in the message delivery to the endpoints (malformed messages, Kafka errors, messages not being consumed fast enough and lost), or duplication at that level (restart a listener on the Kafka stream with the wrong message ID) as well.

This is based on my own pains with Kinesis and Lambda (which, I know, isn't Kafka).

In my experience, better to just allow raw "at least once" messaging and perform idempotant actions based off the messages. It's not always possible (and harder when it is possible), but its tradeoffs mean you're less likely to lose messages.

travisjeffery 21 hours ago 1 reply      
Kafka 0.11 (recently released) has exactly once semantics and transactional messages built-in.

- Talk from Kafka Summit: https://www.confluent.io/kafka-summit-nyc17/resource/#exactl...

- Proposal: https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+E...

StreamBright 22 hours ago 0 replies      
"The single requirement of all data pipelines is that they cannot lose data."

Not necessarily: if the business value of the data is derived from summary statistics, then even sampling the data works, and you can lose events in an event stream without changing the insight gained. Originally Kafka was designed as a high-throughput data bus for an analytical pipeline where losing messages was OK. More recently they are experimenting with exactly-once delivery.

ju-st 22 hours ago 3 replies      
52 requests, 5.4 MB and 8.63 seconds to load a simple blog post. With a bonus XHR request every 5 seconds.
skMed 16 hours ago 0 replies      
Having built something similar with RabbitMQ in a high-volume industry, there are a lot of benefits people in this thread seem to be glossing over and are instead debating semantics. Yes, this is not "exactly once" -- there really is no such thing in a distributed system. The best you can hope for is that your edge consumers are idempotent.

There is a lot of value derived from de-duping near ingress of a heavy stream such as this. You're saving downstream consumers time (money) and potential headaches. You may be in an industry where duplicates can be handled by a legacy system, but it takes 5-10 minutes of manual checks and corrections by support staff. That was my exact use case and I can't count the number of times we were thankful our de-duping handled "most" cases.

openasocket 14 hours ago 0 replies      
So there's a lot of talk on here about the Two Generals Problem, so I thought I'd chime in with some misconceptions about how the Two Generals Problem relates to Exactly Once Messaging (EOM). WARNING: I'm going mostly on memory with this, I could be completely wrong.

EOM is NOT strictly speaking equivalent to the Two Generals Problem, or Distributed Consensus, in an unreliable network. In distributed consensus, at some given point in time, A has to know X, A has to know B knows X, A has to know B knows A knows X, ... It has to do with the fact that the message broker is in some sense the arbitrator of truth, so the consumer(s) don't need full consensus. In an unreliable network, you can have EOM. http://ilpubs.stanford.edu:8090/483/1/2000-7.pdf gives some examples of how that works.

HOWEVER, you can't have EOM when the consumers can fail. If a consumer fails there's no way, in general, to tell if the last message it was working on was completed.

There are a couple of edge cases where you can still have EOM. For instance, a system where you have a message broker A, and a bunch of consumers that read messages x from that queue, compute f(x), and insert f(x) onto message broker B, where f(x) may be computed multiple times for the same x (i.e. if f is a pure function or you don't care about the side effects). This system can implement EOM in the presence of an unreliable network and consumer failures (I think it can handle one or both of the message brokers failing too, not 100% sure) in the sense that x will never be in broker A at the same time as f(x) is in broker B, f(x) will never be in broker B more than once for the same x, and any y in B had some x that was in A such that y = f(x).
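The broker-to-broker invariant described above can be modeled in a few lines. Everything here is an illustrative assumption: the dict "brokers", the pure function `f`, and the lock, which stands in for whatever atomic commit a real broker would provide.

```python
import threading

broker_a = {"x1": 5, "x2": 7}   # message ID -> payload, awaiting processing
broker_b = {}                    # message ID -> f(payload), processed
_lock = threading.Lock()         # stand-in for the broker's atomic commit

def f(x):
    return x * x                 # pure function: safe to recompute on retry

def consume(msg_id):
    """Move one message from A to B at most once (toy model).

    Invariants from the comment above: x is never in A while f(x) is
    in B, and f(x) appears in B at most once per x."""
    with _lock:                  # atomicity is the load-bearing assumption
        if msg_id not in broker_a:
            return               # already moved (a retry), or never existed
        payload = broker_a.pop(msg_id)
        broker_b[msg_id] = f(payload)

consume("x1")
consume("x1")  # retrying is harmless: the message is already gone from A
```

The point of the sketch is that the "consumer can fail" caveat disappears only because `f` is pure and the move is atomic; relax either assumption and the guarantee breaks.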

siliconc0w 17 hours ago 1 reply      
Was thinking a 'reverse bloom filter' could be cool to possibly avoid the RocksDB for situations like this. Turns out it already exists: https://github.com/jmhodges/opposite_of_a_bloom_filter

I love it when that happens.
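The core trick of an "opposite of a bloom filter" is small enough to sketch. This toy version (names and sizes made up, unlike the linked concurrent implementation) may report a previously seen ID as unseen after a hash collision evicts it, but it never reports an unseen ID as seen — the inverse of a bloom filter's guarantee.

```python
import hashlib

class InverseBloomFilter:
    """Toy 'opposite of a bloom filter': a fixed-size table of the
    most recently seen key per slot. False negatives are possible
    (a collision overwrites an older key); false positives are not."""

    def __init__(self, size=1024):
        self.slots = [None] * size

    def _index(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % len(self.slots)

    def observe(self, key):
        """Record key; return True only if it was definitely seen before."""
        i = self._index(key)
        seen = self.slots[i] == key   # exact match, not just a set bit
        self.slots[i] = key           # keep the most recent key per slot
        return seen

ibf = InverseBloomFilter()
ibf.observe("msg-1")   # first sighting
ibf.observe("msg-1")   # definite duplicate
```

That asymmetry is what makes it safe for dedup: a false negative only lets a duplicate through, while a false positive would drop a real message.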

philovivero 13 hours ago 0 replies      
tl;dr: Clickbait headline. Exactly-once delivery not even close to implemented. Typical de-duping, as you've seen and read about hundreds of times already, is what they did.
sethammons 20 hours ago 0 replies      
"Exactly Once"

Over a window of time that changes depending on the amount of ingested events.

Basically, they read from a Kafka stream and have a deduplication layer in RocksDB that produces to another Kafka stream. They process about 2.2 billion events through it per day.

While this will reduce duplicates and get closer to Exactly Once (helping reduce the two generals problem on incoming requests, and potentially for work inside their data center), they still have to face the same problem again when they push data out to their partners. Some packet loss, and they will be sending out duplicates to the partner.

Not to downplay what they have done, as we are doing a similar thing near our exit nodes to do our best to prevent duplicate events making it out of our system.
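The size-bounded window behavior described above — the dedup horizon shrinking as ingest volume grows — can be sketched with a toy stand-in for RocksDB that evicts the oldest IDs first. All names and the eviction policy are hypothetical:

```python
from collections import OrderedDict

class WindowedDeduper:
    """Size-bounded dedup window: once `capacity` IDs are tracked,
    the oldest are evicted, so the effective time window shrinks
    as the rate of ingested events grows."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.window = OrderedDict()   # insertion-ordered set of IDs

    def admit(self, msg_id):
        """Return True if the message should be forwarded downstream."""
        if msg_id in self.window:
            return False                      # duplicate inside the window
        if len(self.window) >= self.capacity:
            self.window.popitem(last=False)   # evict the oldest ID
        self.window[msg_id] = True
        return True

d = WindowedDeduper(capacity=2)
out = [m for m in ["a", "b", "a", "c", "a"] if d.admit(m)]
# "a" slips through a second time once eviction has forgotten it
```

This is why "exactly once over a window" is the honest phrasing: a duplicate that arrives after its ID has been evicted is indistinguishable from a new event.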

squiguy7 21 hours ago 2 replies      
I wonder how they partition by the "messageID" they use to ensure that the de-duplication happens on the same worker. I would imagine that this affects their ability to add more brokers in the future.

Perhaps they expect a 1:1 mapping of RocksDB, partition, and de-duplication worker.
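A sketch of the routing idea being discussed: hash the message ID to a stable partition so every duplicate of an ID reaches the same dedup worker (and its RocksDB). The hash choice and partition count are assumptions, and note that changing `NUM_PARTITIONS` remaps IDs to different workers — which is exactly the scaling concern raised above.

```python
import hashlib

NUM_PARTITIONS = 4  # assumed fixed count, 1:1 with dedup workers

def partition_for(message_id):
    """Route a message ID to a deterministic partition, so all
    duplicates of the same ID land on the same dedup worker."""
    digest = hashlib.md5(message_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Duplicates always hash to the same partition:
same = partition_for("evt-42") == partition_for("evt-42")
```

Real brokers typically do this key-hash routing for you when you produce with the message ID as the partition key; the sketch just makes the determinism explicit.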

ratherbefuddled 20 hours ago 0 replies      
"Almost Exactly Once" doesn't have quite the same ring to it, but it is actually accurate. We've already discovered better trade-offs haven't we?
iampims 22 hours ago 1 reply      
If the OP doesn't mind expanding a little on this bit, I'd be grateful.

> If the dedupe worker crashes for any reason or encounters an error from Kafka, when it re-starts it will first consult the source of truth for whether an event was published: the output topic.

Does this mean that "on worker crash" the worker replays the entire output topic and compares it to the RocksDB dataset?

Also, how do you handle scaling up or down the number of workers/partitions?

qsymmachus 21 hours ago 0 replies      
It's funny, at my company we implemented deduplication almost exactly the same way for our push notification sender.

The scale is smaller (about 10k rpm), but the basic idea is the same (store a message ID in a key-value store after each successful send).

I like the idea of invalidating records by overall size, we hadn't thought of that. We just use a fixed 24-hour TTL.
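The fixed-TTL scheme described here is easy to sketch. This is a toy in-memory version; the injectable `clock` is an illustrative convenience for testing, not anything from the comment.

```python
import time

class TTLDeduper:
    """Toy message-ID store with a fixed TTL (the 24-hour scheme
    described above). Entries older than `ttl_seconds` are forgotten,
    so a duplicate arriving after the TTL is treated as new."""

    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self.expiry = {}          # message ID -> expiration timestamp

    def first_sighting(self, msg_id):
        """Return True if this ID has not been seen within the TTL."""
        now = self.clock()
        # lazily drop expired entries
        self.expiry = {k: t for k, t in self.expiry.items() if t > now}
        if msg_id in self.expiry:
            return False          # duplicate within the TTL window
        self.expiry[msg_id] = now + self.ttl
        return True

d = TTLDeduper(ttl_seconds=24 * 3600)
d.first_sighting("m1")   # first time: True
d.first_sighting("m1")   # within 24h: False
```

A real key-value store with native TTLs (Redis, for instance) does the expiry for you; the size-based invalidation mentioned above trades the fixed window for one that adapts to volume.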

wonderwonder 22 hours ago 3 replies      
Would something like AWS SQS not scale for something like this? We currently push about 25k daily transactions over SQS, obviously nowhere near the scale of this; just wondering what limitations we will potentially bump into.
linkmotif 21 hours ago 0 replies      
It's worth noting that the next major Kafka release (0.11, out soon) will include exactly once semantics! With basically no configuration and no code changes for the user. Perhaps even more noteworthy is this feature is built on top of a new transactions feature [0]. With this release, you'll be able to atomically write to multiple topics.

[0] https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+E...

jkestelyn 17 hours ago 0 replies      
Relevant to this topic: Description of exactly-once implementation in Google Cloud Dataflow + what "exactly once" means in context of streaming:


(Google Cloud emp speaking)

incan1275 21 hours ago 0 replies      
To be fair, they are upfront in the beginning about not being able to adhere to an exactly-once model.

"In the past three months weve built an entirely new de-duplication system to get as close as possible to exactly-once delivery"

What's annoying is that they do not get precise and formal about what they want out of their new model. Also, their numbers only speak to performance, not correctness.

On the plus side, I think it's awesome to see bloom filters successfully used in production. That sort of thing is easy to implement, but not easy to get right for every use case.

ggcampinho 22 hours ago 1 reply      
Isn't the new feature of Kafka about this?


robotresearcher 22 hours ago 0 replies      
> [I]t's pretty much impossible to have messages only ever be delivered once.

IIRC, it's provably impossible in a distributed system where processes might fail, i.e. all real systems.

vgt 19 hours ago 0 replies      
Qubit's strategy to do this via streaming, leveraging Google Cloud Dataflow:


kortox 21 hours ago 0 replies      
With deduplication state on the worker nodes, how does scaling up, or provisioning new machines, or moving a partition between machines work?
majidazimi 19 hours ago 1 reply      
What is so exciting about this? There is still a possibility of duplicates. You still have to put in the engineering effort to deal with duplicates end-to-end. If the code is there to deal with duplicates end-to-end, then does it really matter whether you have 5 duplicates or 35? Or maybe they just did it to add some useful cool tech to the CV?
luord 15 hours ago 0 replies      
This is interesting work. But I think I'll continue relying on at-least-once delivery and idempotency. Exactly once is impossible anyway.

> In Python (aka pseudo-pseudocode)

This annoyed me probably more than it should have.

gsmethells 21 hours ago 2 replies      
Why do I get the feeling this is repeating TCP features at the message level? There must be a protocol that can hide this exactly-once need away. TCP generally doesn't produce downloads that are bad and fail their checksum test, hence the packets that make up the file are not duplicated.
spullara 21 hours ago 1 reply      
This isn't the solution I would architect. It is much easier to de-duplicate later, when processing your analytics workload, and then you don't need to do so much work up front.
PinguTS 21 hours ago 0 replies      
That reminds me of the safety-related protocols we have used for years in embedded electronics, like railroad signaling, medical devices, and other areas.
stratosgear 22 hours ago 3 replies      
Site seems to be down. Any ideas how big these HN hugs of death usually are? How big of a traffic spike brings these servers down?
mooneater 22 hours ago 0 replies      
Awesome story. What I would like to hear more about is the people side: the teams and personalities involved with coming up with this new system and the transition.
pfarnsworth 10 hours ago 1 reply      
Sounds very cool. A couple of questions I had:

1) What happens if they lose their rocksdb with all of the messageIds?

2) Is their Kafka at-least-once delivery? How do they guarantee that Kafka doesn't reject their write? Also, assuming they have set up their Kafka for at-least-once delivery, doesn't that make the output topic susceptible to duplicates due to retries, etc.?

3) > Instead of searching a central database for whether we've seen a key amongst hundreds of billions of messages, we're able to narrow our search space by orders of magnitude simply by routing to the right partition.

Is "orders of magnitude" really correct? Aren't you really just narrowing the search space by the number of partitions in kafka? I suppose if you have a hundred partitions, that would be 2 orders of magnitude, but it makes it sound like it's much more than that.

throwaway67 22 hours ago 1 reply      
... or they could have used BigQuery with a primary key on message ID.
How I learned to code in my 30s medium.com
402 points by bradcrispin  2 days ago   198 comments top 39
soneca 2 days ago 8 replies      
I started to learn to code last November at 37yo.

Studying about 30 hours a week, in two months I finished the Front End Certificate from freeCodeCamp (I highly recommend the site for starters). Then I decided it was better to build my own projects with the tech I wanted to learn (mostly React), using official documentation and tutorials. This is what I accomplished in around 3 months: www.rodrigo-pontes.glitch.me

Then I started to apply for jobs. After around 4 rejections, last week I started as a Front End Junior Developer (using Ember, actually) at a funded fintech startup with a great learning environment for the tech team.

Very proud of my accomplishment so far, but I know the rough part is only starting.

oblio 2 days ago 2 replies      
Somewhat related, perhaps the most spectacular story of a late coder I've ever heard is that of https://en.m.wikipedia.org/wiki/George_Pruteanu (somewhat controversial Romanian literary critic and politician).

Basically, despite having a major in Romanian literature and spending a lifetime as a literary critic, with almost 0 contact with computers, he decided in his late 40s and early 50s to understand the things behind the internet.

So he picked up on his own: PC usage, internet browsing, PHP and MySQL coding, enough to make his own website and a few apps. That, starting from a point where he could barely use a mouse.

When asked during a TV show how he did it, he replied:

Like I did things for my literary criticism: I read a one-meter-high stack of books about the subject.

Every time I need motivation I think about that quote :)

chrisdotcode 2 days ago 4 replies      
I'm sorry, but I can't help but be incredibly cynical and jaded about this, and from reading the comments, nobody seems to have the same sentiment. If this was titled "How I learned to play the piano in my 30s", I don't think anybody would bat an eye: learning an instrument is not like joining some secret cult, and anybody can develop basic music literacy over a year or two. I also do not doubt this man's proficiency, but 30 is not old outside of tech circles. This youth fetishization in tandem with the "everybody's dog should learn to code" meme I think is very short-sighted.

Tech is wildly lucrative, in current demand, and not physical labor. That reduces the barrier to entry to anybody who has a laptop and an Internet connection. Honestly, how many people would be so eager to learn to code if you dropped the average tech salary to $45,000 (matching other professions)? I think far fewer: people seem to want to learn to code to ride the high-pay wave, not for the actual love of code.

Again, let's compare to music. Anybody can go to a guitar store and buy a $200 keyboard. But if I took a 14-week class and afterwards had the gall to call myself a "Music Ninja Rockstar" or some other such nonsense, and started applying to orchestras and bands, I would be called crazy.

Software has eaten the world, and it's here to stay. Increasing general software literacy is no different from saying we should teach everybody how to read (and a good thing). However, throwing each person into a bootcamp telling them "coding is wonderful! you can master it in 5 seconds and make 200k a year!" is no different than holding a similar bootcamp for any other vocation and then wondering why the average plumber can't actually fix your house, but can only use a plunger. I sincerely hope this trend stops. This mindset is broken, and the paradigm is highly unsustainable. Where will we be in 20 years?

brandonmenc 2 days ago 4 replies      
When computers were invented, a lot of the people involved were already adults - plenty in their 40s and above. Before home computers, you didn't get to use a computer until your 20s.

Therefore, the first few waves of programmers included a lot of "already olds."

This is always overlooked as evidence that older people can learn to program.

oweiler 2 days ago 2 replies      
I started learning to code when I was 26, and people told me I was too old and should stay with my shitty job.

Fast forward ten years and I'm a senior software engineer which gives trainings on Spring Boot and Microservices and helps companies implementing Continuous Delivery and Microservice architectures.

You may think I'm gifted, but I'm actually not. I'm a very slow learner and bad at math. I mostly program from 9 to 5 and only work on side projects when I feel like it (which sometimes means not doing any commits for months).

But I like what I'm doing and work hard to improve.

projectramo 2 days ago 5 replies      
This is generally a decent article about balancing non-technical skills and exerting effort in learning.

I found it noteworthy that the "hook" in the title is that the person started in (gasp) their 30s. Why should that be noteworthy? Why wouldn't someone start coding in their 30s, 40s or 50s?

Now it is true that starting a new profession late in life may not always make sense because, presumably, you have so little time left you might as well "ride it out" contributing what you know.

So, yes, it is unusual for a doctor to start learning mathematics in their 40s (though not unheard of: https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di), but making such a change is no stranger in computer science than in any other field.

bradcrispin 2 days ago 2 replies      
I once said that "I realize nothing I do in engineering will ever end up on the front page of Hacker News." Feels like a once-in-a-lifetime moment. Thank you
jondubois 2 days ago 9 replies      
I've been programming for 13 years. I started when I was 14 years old and studied software engineering at university. These days, when I take on well-paid contract work, I sometimes find myself working alongside people who only started learning to code at around 25 and never went to university.

It's upsetting for me to think of all the fun I missed out on in my early life because I was learning programming and pushing myself through university and it turns out that it doesn't even get me a higher pay check in the end.

These days, nobody cares that I'm proficient in all of ActionScript 2 , ActionScript 3, C/C++, C#, Java, Python, AVR studio (microcontroller programming), MySQL, Postgres, MongoDB, RethinkDB, PHP, Zend, Kohana, CakePHP, HTML, CSS, Docker, Kubernetes, AWS, JavaScript, Node.js, Backbone, CanJS, Angular 1, Angular 2, Polymer, React, Artificial Neural Networks, decision trees, evolutionary computation, times/space complexity, ADTs, 3D shaders programming with OpenGL, 3D transformations with matrices, image processing... I can't even list them all. I could wipe out 95% of these skills from my memory and get paid the same.

It only gives me extra flexibility... Which it turns out I don't need because I only really need two of these languages (C/C++ and JavaScript) and a couple of databases.

analog31 2 days ago 1 reply      
When I was a kid, my mom was teaching high school, and thought that she might get laid off due to declining school enrollment in the rust belt. She took a year of programming courses at a community college. The next year, they asked her to teach the course, which she did.

Most of her students were 30+, many were working in the auto industry, including assembly line workers. At the time, there were a lot of bright people working the lines because it had always been possible to skip college and land a decent middle class job at the car plants. But that was coming to an end.

Her students were taking one year of CS and getting hired into reasonably decent programming jobs.

In fact, I was also interested in programming, and learned it in school. When I went to college, my mom discouraged me from majoring in CS because she literally thought programming was too easy to justify 4 years of classroom training, and she thought that the job market for programmers would quickly saturate.

Let's just say we guessed wrong. ;-)

But at the time, college level CS was still maturing as a discipline. Many of the 4 year colleges didn't have full blown CS major programs. I'm betting it's harder now, but I honestly don't know if programming per se has fundamentally gotten any harder.

Edit: Noting some of the comments, I certainly don't want to disparage the CS degree. After all, I majored in math and physics -- hardly a turn towards a practical training. I think these are fields where you have to be interested enough in the subject matter, to study it as an end unto itself. Being able to do actual practical work in a so called real world setting is always its own beast, no matter what you study.

jarsin 2 days ago 4 replies      
What I always tell people: if you find yourself naturally drawn to it, then you will eventually find some level of success. If you're in it just for the money, then you will not stick with it and it probably won't happen.

Same is true for just about most things in life.

This guy found he was naturally drawn to it. End of story.

teekert 2 days ago 1 reply      
I also learned to code after 30. At some point Excel and Origin weren't dealing well with ever-increasing data sizes in my field (biology). I did a 3-day intro course on Python (2) covering basic Python and some NumPy. Back on the job I immediately switched to Python 3, learned about Jupyter, and was lucky enough to have a job where I could take time to learn (although it doesn't take much time to get back up to Excel/Origin-level data analysis skills with Pandas/Seaborn/Jupyter!).

That combination is still gold for me although bioinformatics is forcing me into VSCode/Bash/Git territory more and more. I can recommend anyone wanting to do data analysis to start with the Jupyter/Python/Pandas/Seaborn combo, the notebook just makes it very easy to write small code snippets at a time, test them and move on. Writing markdown instructions and introductions/conclusions in the document itself help you to make highly readable reports that make it easy to reproduce what you did years ago.

colmvp 2 days ago 1 reply      
> Immersion means 100% focus. If possible, no friends, no drinking, no TV, just reading and writing code. If you take five minutes off to read the news, be aware you are breaking the mental state of immersion. Stay focused, be patient, your mind will adapt. Eliminate all distractions, of which you may find doubt to be the loudest. Immersion is the difference between success and failure.

Certainly, I think Deep Work requires full concentration. So when in the mode of learning, I find keeping focus, instead of going to a website to read news or checking e-mail/messages, to be incredibly important in maximizing the incremental process of grasping concepts.

That being said, whereas the author seems to prefer taking a few months to go deep into it, I prefer to immerse myself over a long period of time by learning and practicing a few hours per day (just like an instrument), letting my mind stew in the knowledge during diffuse thinking periods, and then come back to it the next day.

AndyNemmity 2 days ago 1 reply      
I'm 36 and learning how to be a real programmer. Was a Linux Admin, and an architect for my career. Did presales, and became an expert at a lot of different roles within the field.

Never was truly a developer, and decided I wanted to accept a job as one. I've programmed in the past, how hard can it be?

Wow, it's been enlightening. Really hard. I thought it would be straightforward since I've scripted quite a bit in Perl in my past, but being a developer is much more than writing a few scripts to automate a task.

I'm a few months in now, and I am still slower than all my colleagues by quite a bit, and the main language I'm working in has changed already, moved from Python to Go.

Even right now, I'm stuck on an issue around pointers and data structures that feels like it should be easy, and I'm just not getting it.

All you can do is keep confidence up, and keep at it. Immersing in it, and knowing that irrational levels of effort will lead to results.

I thought it would be easier though :)

makmanalp 2 days ago 0 replies      
Every time I see stuff like this I think of Grandma Moses, an accomplished artist who started painting at 78: https://en.wikipedia.org/wiki/Grandma_Moses
alexee 2 days ago 2 replies      
My father is 59 and started to learn programming half a year ago. So far I have been giving him algorithmic tasks to learn basic language constructs; he is now comfortable with basic Java and is able to solve most easy problems from programming contests. Any idea where to go from here? I don't think solving more difficult problems (like those involving algorithms or creative thinking) would make sense at this point. I tried giving him a simple GUI project (tic-tac-toe in Swing); this kind of worked with lots of my help, but of course it was badly designed with model and view mixed, and he is unable to understand design pattern concepts at this point.
paul7986 2 days ago 1 reply      
At 31 I took my savings for my house, quit my robotic customer service job and started a startup. I worked on my 1st startup for three years and along the way taught myself front end development and design. Which I now do for a living.

I say do a startup, and if it fails like 80 to 90% do, you've gained an in-demand skill that you can use to make a nice living.

partycoder 2 days ago 1 reply      
"Learning to code" is somewhat vague.

The "Sorites paradox" is something like: how many grains of sand form a heap? if you remove or add one, is it still a heap?

So, what exactly makes you a programmer? That varies a lot depending on who you ask. Someone said a programmer should be able to detect and report a bug to a hardware manufacturer. Others say that "learning" a general-purpose or Turing-complete language (partially, because most programmers don't know every single aspect of a programming language) makes you a programmer.

I define an "X programmer" where X is backend, frontend, data, whatever... as someone who can not only implement a feature, but do it through understanding rather than through a heuristic of trial and error or reusing code. Also, a person that is able to troubleshoot what is going on if some of the underlying systems is not working as expected.

sonabinu 2 days ago 0 replies      
I started in my 30s after an earlier stint in high school. It was a real struggle. I work in a SE engineering role now with a focus on data science. My stats and math skills have given me an advantage but I still feel like I'm a rookie in many ways. It is important for more of us who transition to SW careers to speak about our struggles and techniques to hang in there. It will render confidence to those who feel alone as they try to find their footing.
hamersmith 2 days ago 0 replies      
Going from not working in the industry to leading a team of developers in just a few years is extremely impressive. I have over a decade of experience as a developer and have not yet made it to that kind of lead position. Is this because your technical skills were superior to your peers', or because you possessed additional soft skills? Either way, what advice would you give for moving into Lead Developer/Engineering Manager roles?
cafard 1 day ago 0 replies      
I learned to code at 18. I did not fall in love with programming: this owed at least in part to Fortran IV, punch cards, and a Burroughs mainframe that was often under maintenance. But I coded a craps game simulation, and passed.

I relearned to code at 31 or so. There was data over here that I needed in a different format over there, and didn't care to retype. I taught myself some minicomputer assembler from the instruction set reference. At that same job, I learned to write macros in the OS's command-line interpreter. I found that I enjoyed programming. And I went back to school.

That was a while ago, long enough that the second or third language that I learned on my own was Perl 4. I would never have called myself a ninja or a rockstar. Yet I have over the years written some very useful code.

ptr_void 2 days ago 0 replies      
As a student trying to make sense of the job space and prospects, there are just too many statements posted on the internet that seem to contradict each other.
dzink 2 days ago 1 reply      
You need more stories like this to show people who wouldn't normally consider CS as a viable, lucrative path to a second career. Areas with high unemployment and people in dwindling old industries may get a second wind in life if they tried his approach. A big change like this also requires multiple exposures to the currently much easier to reach CS education as a possible solution, so I hope more people produce accessible content like this.
jordache 2 days ago 1 reply      
Is a full-stack person still realistic with today's web technologies?

I mean, to build up an expert-level skillset, you'd have to really dedicate yourself to learning the particularities of not just the languages but also their runtime environments.

Unless you have no life and only sleep, eat, and code, or are super intelligent and able to absorb and stay current with everything...

Other than that, I just don't see the full-stack mentality working.

cr0sh 2 days ago 1 reply      
A possibly similar tale is the one being done by some former Kentucky coal miners:


digi_owl 2 days ago 0 replies      
I have found that the problem I have with learning programming is not the logic of it, but memorizing and internalizing all the functionality provided by the standard lib, etc.
logingone 1 day ago 0 replies      
What I found recently about someone who switched from another career to programming is not that they struggled with programming so much as that they struggled with the environment. I had the misfortune of working with an ex-lawyer with two years of programming experience. Hell. He also lacked the ability to have any sort of interesting conversation about programming, as he had no background to reference.
kulu2002 2 days ago 0 replies      
I learnt C, C++, shell scripting, and GNU makefile creation directly on a project. When I did my degree I only knew C, just enough to pass. I was exposed to writing device drivers for I2C and SPI on the very first day, and someone just dumped 1GB of technical junk on my PC, which included some APIs of the RTOS I was supposed to work on! But I would say that was a really steeeep learning curve... I am amazed and surprised today when I look back at where I started 13 years ago :)
skocznymroczny 2 days ago 0 replies      
I read this as "How I learned to code in 30s" and I thought it'd be a parody of "Learn X in Y" tutorials.
kodepareek 2 days ago 0 replies      
I started learning to code when I was 31. I did have an engineering degree, but I learnt basically nothing after getting into engg school. Spent most of the 4.5 years worrying whether I was smart enough to do this and setting myself up for very dismal results.

Became an advertising copywriter after college and spent 7 years in the copy mines. It was truly a profoundly uninspiring industry (though I continued to doubt myself and never really got to where I wanted to and should have)

Founded a startup with a friend hoping for a fresh start. Took forever to find a developer so in some strange moment of overconfidence (sanity?) I decided I would take a shot at it and started learning Python. Found myself hypnotized by the codeacademy course and knocked it off in 3 days or less.

After a few false starts, a developer friend came on board as an advisor and told me to pick up Django. In a few months (with him and another good friend doing all the heavy lifting) I got far enough into the thing to be able to scrape data, make API calls and develop the admin interface.

With everything I learnt I found a block of that constant self doubt melting away. I had never felt so capable and in control in my entire life.

The startup wound up, though, and I had to take a job at a design agency. Though I picked up the basics of HTML and CSS there, most of my work was managing clients (aarghh). I left after a few months for a part-time writing job at this startup.

But within a month of me joining, the CTO quit and the company was in massive flux. I just stepped forward and said I would code. The other developers happily took the help and I got my first job as a programmer. The next 1.2 years were just full days of writing scripts to automate our workflow and figuring out this danged JS/Node thingy (which I really love now, btw).

When this place wound up too, I studied React, and now I have a big 6-month project at this company helping them automate their workflow with an admin app. I'm writing the full-stack code all by myself, which is so exciting and empowering.

Programming is awesome. It's my one piece of advice to anyone who asks me for advice these days. It changed my life completely. From being a constantly depressed and volatile guy, I am now fairly confident and rarely angry.

Surprise bonus: I have become far more creatively productive after leaving the creative industry and have written a bunch of songs (that I don't hate), and I've also started learning to play the piano, something I always wanted to do.

Next up is Algos and Data Structures the next time I have enough saved for a 3 month immersion. I really do think they are super important. Plus picking up a new language. Suggestions welcome.

chirau 2 days ago 1 reply      
So do bootcamps teach data structures and algorithms?
maggotbrain 2 days ago 1 reply      
Reading that makes me glad to be a network engineer. Ethernet, BGP, and OSPF don't change all that much. I am all for learning the latest Python, NetMiko, NAPALM stuff for network automation. This article reads like masochism.
thinkMOAR 2 days ago 0 replies      
The title implies that you are ever 'finished' learning to code. To anybody thinking about starting: this is a lie, it's a never-ending road :)
mattfrommars 2 days ago 0 replies      
I'm facing the problem of finding a mentor and a space where I can succeed. I want to be able to do anything with the power of Python!
sAbakumoff 2 days ago 1 reply      
2017: code bootcamps produce an army of amateurs that make the internet of shit.
CognacBastard 2 days ago 0 replies      
This is great advice for someone learning to break into the coding world.
lhuser123 2 days ago 0 replies      
Good inspiring story
minademian 2 days ago 0 replies      
contains a lot of real advice. the sharing of experiences and insight into his process makes this piece really great.
commenter1 2 days ago 2 replies      
LordHumungous 2 days ago 0 replies      
It's not that hard jeez
Judges refuse to order fix for court software that put people in jail by mistake arstechnica.com
329 points by kyleblarson  23 hours ago   43 comments top 10
wonderwonder 22 hours ago 4 replies      
Obviously a case of not enough wealthy people being affected. If the poor are falsely imprisoned, it's just business as usual, and they lack the clout to hire an aggressive, talented attorney. This will likely rectify itself as soon as a substantially wealthy individual is improperly imprisoned.

Just a continuation of the sad state of our legal system where punishment is not so much an issue of guilt but of wealth or more specifically the lack thereof.


rrggrr 22 hours ago 1 reply      
>"Even if there was standing, the plaintiffs did not establish that they would suffer harm or prejudice in a manner that cannot be corrected on appeal. They also fail to show that they lack an adequate remedy at law, as they may move for correction of erroneous records at any time," the 1st District continued.

Civil and criminal Courts are intentionally blind to the cost, suffering and disruption the process inflicts. It should be obvious that filing an appeal and moving to correct records is expensive at a minimum, and hugely disruptive to people struggling to survive, support families, etc. It may be easier and cheaper to simply do time for alleged offenses, innocent or not.

It's appalling. @wonderwonder's comment is on target.

Overtonwindow 19 hours ago 0 replies      
"the public defenders office has filed approximately 2,000 motions informing the court that, due to its reportedly imperfect software, many of its clients have been forced to serve unnecessary jail time, be improperly arrested, or even wrongly registered as sex offenders."

There's the metric. Maybe there's a law firm that's willing to sue on behalf of all of these people. Surely the hassle of that lawsuit would push them to change, and those defendants would have standing.

andrewla 22 hours ago 0 replies      
https://news.ycombinator.com/item?id=13069775 was posted as the nature of the problem began to emerge, and has many interesting discussions of the underlying problems involved.
downandout 18 hours ago 0 replies      
The law does allow all of these people serving extra days in confinement to seek financial damages for each day. It seems to me like a civil attorney could file boilerplate lawsuits for each of these people, since the underlying facts in each case are nearly identical. Besides the money that both the attorney and his clients would enjoy, that would be by far the fastest route to getting this fixed. It will stay broken until it hits the county in the wallet.
baybal2 21 hours ago 1 reply      
In any normal jurisdiction, a prima facie mistrial would've resulted in an automatic disqualification and disbarment of a presiding judge.

America admits no legal liability of court employees over anything except gross miscarriage of justice, and even for such cases Americans invented insane legal theories that let judges walk away from charges ranging from corruption and bribery to selling "freedom for a blowjob".

Push for personal liability of judges in mistrials and violations of court protocols.

slang800 19 hours ago 0 replies      
Actual arguments used by the judge: https://www.documentcloud.org/documents/3514379-Order-Denyin...

From a brief reading, it seems like the complaint is delays/errors in document entry by clerks (like updates to probation terms or bail postings) and that the search interface isn't connected with other databases.

Just sounds like confusing software, or users that haven't been trained to use it. However, it's not clear if it's causing more or less clerical errors than the last system they had.

hvo 20 hours ago 0 replies      
How about if the names of two sitting judges are added to the list, one appellate judge and one supreme court judge? I am confident the software will be fixed.
WCityMike 17 hours ago 0 replies      
As a legal assistant in Illinois, which will start using this software statewide by the end of the year (excluding Cook and some other counties, which adopted other software), this is troubling news.
Why Is NumPy Only Now Getting Funded? numfocus.org
355 points by numfocusfnd  2 days ago   106 comments top 25
rjbwork 2 days ago 2 replies      
We have this problem in the .NET world. Accord.NET is written by a brilliant academic and programmer. It's well written, and has a good API, but it is largely the effort of this one dude, with minor contributions from a smattering of other fellows.

Again, it is great in general, but it has bugs and rough edges here or there, and a lot of people don't trust it for production. I wish there was a way for people to be properly compensated for building and maintaining such vital scientific and mathematical computing software.

jordigh 2 days ago 3 replies      
Also a problem for Octave. Remember this?


It also stings a little when people say that it's completely obsolete because Matlab itself is "legacy" and we should all be abandoning the language, Octave included... and yet, even though I like numpy and Python and matplotlib and Julia and R, I still find myself reaching for Octave whenever I need a quick visualisation of some data.

js8 2 days ago 1 reply      
"The problem of sustainability for open source scientific software projects is significant."

Yeah, William Stein can tell these stories too: http://sagemath.blogspot.cz/2015/09/funding-open-source-math...

wodenokoto 2 days ago 1 reply      
The "Every successful science library has the ashes of an academic career in the making" quote has been mentioned several times in the comments, so I thought I would give a plea to everybody who works in academia to help the people who build the foundational tools of your research by citing them in your papers:


Radim 2 days ago 2 replies      
Is the idea that "foundational work" (in any field) can be done without "huge sacrifices" widely accepted?

It sounds a tad unrealistic to me, unsupported by history.

It's as if people want to have it both ways: Create innovative SW, but also don't take risks or make sacrifices.

Offer software "for free" (and belligerently oppose even something like GPL), but also get paid (preferably by the government, so the people actually footing the bill have no say in it) and be long-term sustainable.

What's next: get paid, but also don't pay income taxes? Give away project control, but also keep it? :)

All understandable desires, but a little schizophrenic.

Disclaimer: I am a big fan of open source and NumPy in particular. I mentor students and OSS newcomers, I even pay one full-time dev to work only on OSS. It's just that I try not to kid myself about where the time&money comes from and where it goes, and I try not to have random people pay for my hobbies.

Extremely relevant previous HN conversion on this topic:


danjoc 2 days ago 4 replies      
>the entire scientific Python stack was essentially relying upon the free-time work of only about 30 people, and no one had funding!

30 people? I remember a time when a certain fruit company would enter a field, literally hire all 30 of those guys, and put them behind closed doors. Then in 2 years they'd dominate the field for the next decade.

Are these guys turning down offers? Or is the fruit company that poorly managed now?

sandGorgon 2 days ago 1 reply      
Because developers generally don't know (or don't like) the outreach necessary to fundraise.

For example, in NumPy's case: https://github.com/numpy/numpy.org/issues/9 is a request from March 2017 to add a donation button to the website. I'm not sure that, 6 months back, NumPy was even legally structured to receive larger funding. I posted a similar comment (with many more replies) in the context of Octave and its funding: https://news.ycombinator.com/item?id=13604564

Tl;dr: Don't ask for donations; instead, sell "gratitude-ware".

There are tons of people who WANT to support these projects, but you have to make it easy and accountable to do that. The best example that I usually give is Sidekiq.

@mperham is awesome that way "This is exactly why I disclosed my revenue: people won't know there's a successful path forward unless it's disclosed. I want more OSS developers to follow my lead and build a bright future for themselves based on their awesome software."

In fact, I believe there's a start-up to be done here. "Stripe Atlas for Open Source software"

projectramo 2 days ago 1 reply      
Q: Are we conflating two issues?

Is there a difference between the "sole developer problem" and the "lack of funding" problem.

I mean, even if a project finds funding, does it follow that it will attract more talented developers?

One way to distinguish the two issues is to look at for-profit software. In the cases where there is one primary developer, do they find it easy to keep the software going when the person retires?

I ask this because, I think, beyond the very real monetary issue, there is a question of how development works. Do we need one very talented individual who does the lion's share of the lifting?

travisoliphant 2 days ago 1 reply      
Original NumPy author here. I have a lot to say on this topic, given that it has literally consumed my life over the past 20 years. You can go here for some thoughts about some of this: http://technicaldiscovery.blogspot.com/ There are several articles there that relate but in particular http://technicaldiscovery.blogspot.com/2012/10/continuum-and... and http://technicaldiscovery.blogspot.com/2017/02/numfocus-past...

I knew what I was getting into when I wrote NumPy. I knew there was not a clear way to support my family by releasing open source software, and I knew I was risking my academic career.

I did it because I believed in the wider benefit of ideas that can be infinitely shared once created and the need for software infrastructure to be open-source --- especially to empower the brightest minds to create. I did it because others had done it before me and I loved using the tools they created. I hoped I would inspire others to share what they could.

There have been a lot of people who have helped over the years. From employers willing to allow a few hours here and there to go to the project, to community members willing to spend nights and weekends with you making things work, to investors (at Continuum) willing to help you build a business centered on Open Source.

There are many people who are helping to fix the problem. In 2012, I had two ideas as to how to help. Those who know me will not be surprised to learn that I pursued both of them. One was the creation of NumFOCUS that is working as a non-profit to improve things. The second was the creation of Continuum (http://www.continuum.io) to be a company that would work to find a way to pay people to work on Open Source full-time.

We have explored several business models and actually found three that work pretty well for us. One we are growing with investors, a second we are continuing with, and another we are actually in the process of helping others get started with and ramping down on ourselves.

Along the way, I've learned that open source is best described in the business world as "shared R&D". To really take advantage of that R&D you need to participate in it.

We call our group that does that our "Community Innovation" group. We have about 35 people in that group now all building open-source software funded via several mechanisms.

We are looking for people to help us continue this journey of growing a company that resonantly contributes significantly to Open Source as part of its mission. If you are interested, contact me --- I am easy to track down via email.

ssivark 2 days ago 0 replies      
There is a long-standing problem in open source software, which is that there is no "business model" associated with funneling resources to people putting significant effort into it. Setting up a consulting business to monetize software creates the perverse incentive to make software harder to use, but there seem to be some examples where this model has worked out reasonably.

Open source projects are typically started by people working in the field, who have a strong urge to scratch some itch. Even if we find a way to find money for them to work full-time, they often don't have the desire to "productize" software, or to create/nurture/govern an organization around bringing together different stakeholders who might be able to use, or contribute to the software. (We got really lucky with Linus+Linux)

ArneBab 2 days ago 0 replies      
Another project which is easy to overlook: Think about how many scientists use Emacs for most of their development and writing. But there is (to my knowledge) not even a single paid developer working on it.

( http://gnu.org/s/emacs )

Q6T46nT668w6i3m 2 days ago 0 replies      
I believe I'm one of (or near) the top 30 contributors (I've made substantial contributions to all of the aforementioned packages), and I'm funded to write scientific software. I'm extremely fortunate. Unfortunately, like so many things, I suspect it has everything to do with pedigree (e.g. my lab, my institution, my peers, etc.) rather than my (or my coworkers') exact contributions. In fact, I don't know if any of my lab's grants has ever explicitly mentioned our contributions to one of the discussed packages. However, this could change. I'm extremely encouraged, for example, by the comments from new institutions like OpenAI or the Chan Zuckerberg Initiative about the necessity of funding software.
mschaef 2 days ago 0 replies      
It's surprising to me that people are surprised by this.

Even setting aside the fact that the people that can do this work are few in number, the vast majority of people need a way to support themselves and their family. If the number of people that have these skills is low, the subset that is both altruistic enough to donate them for a sufficient period of time and personally able to do so must be vanishingly small. (And the negative feedback a lot of OSS maintainers receive doesn't help.)

Companies have the same issue... there has to be a fairly direct connection between an expenditure (paying developers) and a return on that investment. That can be a very difficult argument to make.

mtmail 2 days ago 1 reply      
If your open source project needs funding there is https://opencollective.com/opensource (currently waiting, I'm not affiliated)
santaclaus 2 days ago 1 reply      
It blows my mind that NumPy is just getting funding. How did the Eigen (used in TensorFlow, among other things) folks keep it going?
teekert 2 days ago 1 reply      
I tried at work to get some money to the maintainer of iRAP, an RNA sequencing analysis pipeline we depend on heavily at the moment. But business sees this as wasted money: it's there for free, why not take it? Reading this, I think I'm going to double down on my efforts again. We get so much value out of a huge pile of FOSS software; we should be donating. Meanwhile we have spent piles of money on Matlab for years, and we aren't even allowed to run Linux on our laptops if we wanted to.
mattfrommars 2 days ago 2 replies      
Ok, so the problem seems to be 'lack of maintainers', or, stretched a bit, 'lack of contributors'. The article later linked to https://www.slideshare.net/NadiaEghbal/consider-the-maintain... which kind of reminded me of a problem I'm facing.

After getting through the basics of "Introduction to Computer Science Using Python", with the forever-pending goal to become a "Python Developer": is anyone here who is experienced in Python willing to be my mentor? In return, free Python labor. :)

largote 2 days ago 0 replies      
Because it's an important tool for Machine Learning, which makes money from that field (of which there's plenty going around right now) flow into it.
jeremynixon 2 days ago 0 replies      
This is still an important problem for numerical compute in other languages. It's a struggle to do data analysis and write machine learning applications in Scala, Java, C++, etc. due to a lack of Numpy / Pandas style ease of use and functionality.
abousara 2 days ago 3 replies      
Each country has its own taxes to sustain fundamentals; the same thing must apply in the software industry. Software engineers might be paid $100k+ while developing on top of open source languages/frameworks or libraries. That is not fair; it is like riding a starving horse.

A French point of view (after all, France invented VAT...) would suggest introducing a tax on software engineers' salaries (1%?), redistributing this fund to the most-used languages/frameworks/libraries, and using a part to sustain new projects.

alannallama 2 days ago 0 replies      
This is exactly the problem Open Collective[0] exists to solve. Often times, there are people who want to financially support a given open source project, but there is no channel by which to do so. Creating the financial channel is the first step toward a much needed culture change where the assumption is you will support the open source you rely on, especially if you're making money off it.

[0] http://www.opencollective.com

prewett 2 days ago 1 reply      
I thought that Enthought sponsored a lot of NumPy development, kind of like a corporate caretaker or something. Is that not the case?
kem 2 days ago 0 replies      
I appreciate this article being posted, and have the utmost respect for NumPy developers. The discrepancy between how heavily certain important open-source libraries are used and how little support they get is bewildering sometimes.

As I was thinking about it, though, I'm not surprised NumPy hasn't been funded before. The reasons why say a lot about biases in memory.

It wasn't that long ago that the sorts of things NumPy does were seen as fairly niche, and in the domain of statistics or engineering. It's only with relatively recent interest in AI and DL that this has been seen as within the purview of Silicon Valley-comp sci-type business, as opposed to EE or something different. I still am kind of a little disoriented--the other day, looking through our university's course catalog, I realized that certain topics that would have been taught in the stats or psychology departments are now being seen as the territory of comp sci. Statisticians have written excoriations about being treated as if they don't exist, as comp sci blithely barrels forward, reinventing the wheel.

I'm not meaning to take sides with these issues, only pointing out that I think the world we live in was very different not so long ago. It might seem puzzling that NumPy hasn't had more funding, but I think that's in part because what it's most profitably used for now wasn't really seen as much more than academic science fiction not too long ago.

The other part of it too, is that until relatively recently, if you were to do numerical heavy lifting, you'd almost certainly be expected to do that in C/C++ or maybe Fortran. There's a tension in numerical computing, between the performance and expressiveness that's needed, and Python is on one end of that continuum, far from the end that is traditionally associated with complex numerical computing. Sure, you had things like MATLAB with Python in the same functional role, but those were largely seen as teaching tools, or something that engineers did for one-off projects, having learned to do that in school (I still think the use of python in ML derives from the use of Python as a teaching tool in uni).

I'm not trying to knock Python or NumPy or anything, just kind of trying to convey a different perspective, which is that I can remember a time not too long ago when the use of Python in numerics was seen as primarily didactic in nature, or for limited circumscribed applications.

FWIW, it seems to me Python is kind of on a path similar to what happened with javascript, which was treated as kind of an ancillary helper language on the web, until Google started pushing its limits. Then there was browser wars 2.0, and huge efforts put into javascript, and it became a main player in network computing. To me, there's a similar trend with Python: it really kind of existed as a language for prototyping and scripting tasks, and now finds itself in a different role than it has been used for traditionally, and projects in that area are getting an influx of money accordingly. What I see happening is (1) a blossoming diversity of numerical computing communities (Haskell, Python, Julia, Kotlin, Scala, Rust, Go, etc.), due to competition and variation in application scenarios and preferences, (2) a huge influx of resources being put into Python to make it more performant, or (3) people jumping ship from Python into one of those other platforms to get more bang for the buck [or (4) some combination of all of these.]

anigbrowl 2 days ago 0 replies      
Because capitalism is an inherently exploitative economic paradigm?
carapace 2 days ago 0 replies      

> "And if youd like to take action to contribute to project sustainability, consider becoming a NumFOCUS member today."


How to read and understand a scientific paper: a guide for non-scientists lse.ac.uk
312 points by ingve  3 days ago   58 comments top 20
neutronicus 2 days ago 1 reply      
I have a little "hack" that I find extremely helpful for getting a sense of specific research fields.

Journal articles, even review papers, are cramped for space and so tend to be very dense. The author suggests methods for doing battle with this density, but I suggest that, before doing that, you search for a class of document that's allowed to be as expansive as the author desires, and whose authors have recently struggled to learn and understand their content, and so tend to be expansive:

PhD Theses

Find out what research group published the research, find out which graduate students have recently graduated from that group, and read their theses (if the author's command of the language of publication isn't what you'd prefer ... find another graduate student). I guarantee you it will function much better as an introduction to what the group does than trying to parse any of their journal publications. In particular, the "draw the experiment" step will often be solved for you, with photographs, at least in the fields where I've done this.

startupdiscuss 2 days ago 3 replies      
This is a good guide, but I will tell you a trick that is faster, easier, and more effective:

read 2 or 3 papers.

All that effort you would put into doing these steps? Instead, read 1 or 2 other papers that the author refers to in the beginning.

Science is a conversation. When you read the other papers, even if you don't understand them at first, you will get a sense of the conversation.

Also, some writers are abysmal, and others are amazingly lucid. Hopefully one of the 3 papers you read will be the lucid one that will help you understand the other 2.

closed 2 days ago 1 reply      
I love how simple and clear this post is.

As a kind of weird aside, if anyone ever emailed me about any of my journal articles, I would 100% respond to them (assuming they weren't a machine). I think most of my colleagues would do the same (except for articles featured in a newspaper, which might garner a lot of weird emails).

lumisota 2 days ago 0 replies      
Keshav's "How to Read a Paper" [1] is a good guide, though perhaps less in the "for non-scientists" camp.

[1] http://ccr.sigcomm.org/online/files/p83-keshavA.pdf

choxi 2 days ago 0 replies      
> As you read, write down every single word that you don't understand. You're going to have to look them all up (yes, every one. I know it's a total pain. But you won't understand the paper if you don't understand the vocabulary. Scientific words have extremely precise meanings).

That's a great tip. I've found that a lot of papers aren't necessarily complicated, but the vocabulary is unfamiliar (but you experience the same sense of confusion with both). It's interesting that we often conflate complexity with unfamiliarity, my reading comprehension abilities improved quite a bit by understanding the difference between the two.

glup 2 days ago 2 replies      
I don't understand the opposition to abstracts: dense means high information content, so if you know the field you can learn a whole lot (like whether you should read this paper or another one).
ChuckMcM 2 days ago 0 replies      
Oh this is awesome, well presented and clear.

A couple of notes: generally, if you email the author of a paper, they will send you a copy. Scholar.google.com can be used to evaluate the other papers referenced; highly cited ones will be 'core' to the question, less highly cited ones will address some particular aspect of the research.

For any given paper, if it cites one or two seminal papers in the field, you can build a citation cloud to create what is best described as the 'current best thinking on this big question'. You do that by following up the citations and their citations for two or three hops. (kind of like a web crawler).

With something like sci-hub and some work on PDF translation, it should be possible to feed two or three 'seed' papers to an algorithm and have it produce a syllabus for the topic.
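The citation cloud described above is just a breadth-first crawl over the citation graph. A minimal sketch, assuming a hypothetical `fetch_citations` callable you would supply yourself (e.g. backed by some scholarly-metadata API; nothing here names a real service):

```python
from collections import deque

def citation_cloud(seed_ids, fetch_citations, max_hops=2):
    """Breadth-first crawl of citations starting from seed papers.

    `fetch_citations` maps a paper id to the ids it cites (hypothetical;
    supply your own lookup). Returns {paper_id: hop_distance} for every
    paper reachable within max_hops of a seed.
    """
    seen = {pid: 0 for pid in seed_ids}
    queue = deque(seed_ids)
    while queue:
        pid = queue.popleft()
        hops = seen[pid]
        if hops >= max_hops:
            continue  # don't expand beyond the requested radius
        for cited in fetch_citations(pid):
            if cited not in seen:
                seen[cited] = hops + 1
                queue.append(cited)
    return seen
```

Ranking the crawled papers by how often they are cited within the cloud would then approximate the "current best thinking" set the comment describes.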

olsgaard 2 days ago 0 replies      
About identifying "The Big Question", I have a story from my days as a graduate student, where I failed to do so.

I was asked to help on a project that needed to identify humans in an audio stream. During my literature review, I came across the field of "Voice Activity Detection" or VAD, which concerns itself with identifying where in an audiosignal a human voice / speech is present (as opposed to what the speech is).

I implemented several algorithms from the literature, tested them on the primary test sets referenced in papers, and spent a few months on this until I finally asked myself, "What would happen if I gave my algorithm an audio stream of a dog barking?"

The barking was identified as "voice".

As it turns out, the "Big Question" in Voice Activity Detection is not to find human voices (or any voices), but to figure out when to pass on high-fidelity signals from phone calls. So the algorithms tend to only care about audio segments that are background noise and segments that are not background noise.

deorder 2 days ago 0 replies      
I usually first start reading or glance over papers (and non-story books) from the end to the beginning before I read it the other way around. This has the following benefits for me:

- By knowing about the conclusion first I will better understand the motivation and why certain steps are being taken.

- I find out sooner if the paper (or book) is something I am looking for.

I like to read papers unrelated to my field to learn new things to apply. To be honest, some papers still take me a long time to understand because they usually assume you are already researching the topic (e.g. certain terms, symbols and/or variables that are not defined).

nonbel 2 days ago 4 replies      
There is a difference between reading and studying a paper. Many papers I just check the abstract for claims of A causes/correlates B (ie it is a "headline" claim), and look for a scatter plot of A vs B (it is missing).

Then I do ctrl-F "blind" (can't find it), ctrl-F "significance" (see p-value with nearby text indicating it has been misinterpreted). Boom, paper done in under a minute. There is really no reason to study such papers unless they have some very specific information you are searching for (like division rate of a certain cell line or something).
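The ctrl-F screen described above is easy to mechanize. A toy sketch, with crude keyword checks only (the function name and regexes are my own assumptions, and obviously no substitute for reading the paper):

```python
import re

def quick_screen(text):
    """One-minute screen of a paper's text: does it mention blinding,
    does it talk about significance, and does it report p-values?
    Purely heuristic keyword matching."""
    return {
        # matches "blind", "blinded", "double-blind", ...
        "mentions_blinding": bool(re.search(r"\bblind", text, re.I)),
        # matches "significance", "significant", ...
        "mentions_significance": bool(re.search(r"\bsignifican", text, re.I)),
        # matches things like "p < 0.05" or "p = .01"
        "reports_p_values": bool(re.search(r"\bp\s*[<=>]\s*0?\.\d+", text, re.I)),
    }
```

A paper that claims an effect but trips none of these flags is exactly the kind the comment suggests setting aside after a minute.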

sn9 2 days ago 1 reply      
>I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It's appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it's a reputable study.

Better advice intended to make layman with zero background in science become more scientifically literate would be to tell them to read some textbooks.

Later on in the article, she tells people to write down each and every thing you don't understand in an article and look them up later. And this is excellent advice for people with a background equivalent to an advanced undergraduate or higher, but for people with zero background it would be better to read some textbooks and get yourself a foundation.

Honestly, even when I was in grad school in neuroscience, I asked around for advice on reading papers and the surprisingly universal response from other grad students was that it took 2 years to become reliably able to read and evaluate a research paper well. And this is 2 years in a research environment with often weekly reading groups where PIs, postdocs, grad students, and some undergrads got together to dissect some paper. These reading groups provided an environment in which you had regular feedback on your own ability to read papers by seeing all the things those more experienced than you saw and that you missed. A paper that took me 3+ hours of intense study would take a postdoc a good half hour to get more information out of.

I feel like this article makes reading articles well seem a lighter undertaking than it really is. It's really no wonder we see studies misinterpreted so often on the internet, where people Google for 5 minutes and skim an abstract.

DomreiRoam 2 days ago 0 replies      
I would like to have a digest or an overview written for an IT practitioner. I went to an SC/IT conference and enjoyed the talks, and I noticed 2 things: 1/ You learn new things and new approaches that can bring value to our job. 2/ It seems that the research sector discovers stuff that is already known in the industry.

I think it would be great to have a journal/blog that would construct a bridge between the industry and the university.

kronos29296 2 days ago 0 replies      
As a student who needs to read research articles for my project, this article gave some new ideas on how to approach those long boring and cryptic pieces of text that just take days to understand. Thanks to the person who posted it.
luminati 2 days ago 0 replies      
A couple things I try to do when reading research papers, inspired by these two amazing [b|v]logs:
[1] https://blog.acolyer.org/
[2] https://www.youtube.com/user/keeroyz

I try to paraphrase the paper into an Acolyer-like 'morning paper' blog post on Evernote while mentally I am directing a 'two minute papers' video on the paper :)

yamaneko 2 days ago 0 replies      
This suggestion by Michael Nielsen is also very good: https://news.ycombinator.com/item?id=666615
pitt1980 2 days ago 0 replies      
What's odd to me, is that lots of professors have blogs in which they write quite a bit in plain language that doesn't require an instruction manual in order to be read
syphilis2 2 days ago 4 replies      
Why don't the authors do these 11 steps for us?
amelius 2 days ago 0 replies      
I'd like an answer to: how/where to ask the relevant community a question about a scientific paper.
minademian 2 days ago 0 replies      
this is a great guide. i wish more writing on the Internet has this blend of substance, message, tone, and grit.
apo 2 days ago 8 replies      
Sensible advice overall, but I completely disagree with these:

> Before you begin reading, take note of the authors and their institutional affiliations.


> Beware of questionable journals.

Institutional affiliation and journal imprimatur should have no bearing in science. These are shortcuts for the lazy, and they introduce bias into evaluation of the paper's contents.

Even more than that, dispensing advice along these lines perpetuates the myth that scientific fact is dispensed from on high. If that's the case, just let the experts do the thinking for you and don't bother your pretty little head trying to read scientific papers.

If the author's approach to reading a paper only works by checking for stamps of approval, maybe the approach should be reconsidered.

Stupidly Simple DDoS Protocol (SSDP) Generates 100 Gbps DDoS cloudflare.com
341 points by riqbal  1 day ago   98 comments top 12
majke 1 day ago 2 replies      
Author here. Allow me to extend the post a bit. It turns out that about 2.4% of the IPs that respond to SSDP queries do so from a weird port number! For example:

 IP > UDP, length 95
 IP > UDP, length 249
The first packet is the SSDP M-SEARCH query. The second is a response from my printer. Notice: the source port for the response is not 1900 (but the dst port is okay). I'm not sure what the spec has to say about it, but it's pretty weird. What's worse, these responses won't be matched by a "sport=1900" DDoS mitigation firewall rule.

I'm not sure what the moral is here. But if you ever see some UDP packets from a weird port, to a weird port - maybe it's this SSDP case.
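For illustration, a minimal sketch (hypothetical packet dicts and function name, not a real firewall API) of why a source-port match misses those off-port responses:

```python
def matches_ssdp_filter(pkt):
    # the common mitigation: match UDP traffic whose *source* port is 1900
    return pkt["proto"] == "udp" and pkt["sport"] == 1900

# a well-behaved SSDP response is caught...
assert matches_ssdp_filter({"proto": "udp", "sport": 1900, "dport": 50000})
# ...but the ~2.4% that answer from an ephemeral source port slip through
assert not matches_ssdp_filter({"proto": "udp", "sport": 47123, "dport": 50000})
```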

hueving 1 day ago 1 reply      
More casualties from BCP 38 failures. This article mentions it but then dilutes the importance of it by suggesting SSDP is a problem. If IP spoofing did not work on the Internet, none of these UDP reflection attacks would work.

A scheme to strong-arm the adoption of BCP 38 is key to stopping these attacks from growing. IoT has shown us that expecting device updates to disable these UDP protocols is a lost battle.
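As a rough sketch of what BCP 38 ingress filtering amounts to (the prefix below is a documentation range, purely illustrative): an edge router forwards a customer's packet only if its source address falls inside that customer's assigned prefix, which is what makes spoofed-source reflection impossible.

```python
import ipaddress

# hypothetical prefix assigned to one customer port (TEST-NET-3, for illustration)
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def bcp38_permits(src_ip):
    # forward only packets whose source address the customer legitimately owns;
    # spoofed sources (the fuel for UDP reflection attacks) drop at the edge
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

assert bcp38_permits("203.0.113.42")   # legitimate customer source
assert not bcp38_permits("8.8.8.8")    # spoofed source, dropped
```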

upofadown 1 day ago 6 replies      
>It's not a novelty that allowing UDP port 1900 traffic from the Internet to your home printer or such is not a good idea.

How would this even be possible? Home routers have to NAT everything. Normally you have to set up reverse NAT to get ports forwarded to the LAN.

voltagex_ 1 day ago 1 reply      
It will be years and years until those vulnerable miniupnpd versions are updated. Most are in embedded devices which will never see another update.

I'm glad to see miniupnp is still in active development: https://github.com/miniupnp/miniupnp but I can't work out if it's set to be vulnerable by default.

thomasdereyck 1 day ago 0 replies      
Shameless plug: When I read about SSDP a little while ago I was curious to see if I'd encounter it on many networks. As I was also trying to learn Swift/Apple development, I've written two (non-free) little apps for macOS/iOS to monitor SSDP messages:



Ever since creating it and just checking on some networks, I'm surprised by how many devices are actually using it. I probably saw this in Wireshark before as well, but overlooked it because you're never really looking for it. I wonder if many other such protocols are often used but easily missed...

saurik 1 day ago 3 replies      
> Internet service providers should never allow IP spoofing to be performed on their network. IP spoofing is the true root cause of the issue. See the infamous BCP38.

I don't see how it is at all reasonable to shift blame from a protocol that assumes the world can be trusted to the untraceable goal of "every single network in the entire world should only generate trusted data: then the problem would be solved".

> Internet providers should internally collect netflow protocol samples. The netflow is needed to identify the true source of the attack. With netflow it's trivial to answer questions like: "Which of my customers sent 6.4Mpps of traffic to port 1900?". Due to privacy concerns we recommend collecting netflow samples with largest possible sampling value: 1 in 64k packets. This will be sufficient to track DDoS attacks while preserving decent privacy of single customer connections.

OMFG. Do you want deanonymization attacks? Because this is how you get deanonymization attacks :/. The right form of solution here is not to encourage ISPs to log even more of our traffic (a practice I wish were illegal), but to try to kill off UPNP through every form of leverage possible (even if it breaks things).

I'd say this is "so disappointing", but I guess I shouldn't expect much from the company that tried its damndest to argue that nothing of importance was leaked from Cloudbleed even when you could still recover Grindr requests complete with IP addresses that they had managed to leak well after they tried to claim that data had been scrubbed :/.

gbrown_ 1 day ago 3 replies      

 More on the SSDP servers

Since we probed the vulnerable SSDP servers, here are the most common Server header values we received:

  104833 Linux/2.4.22-1.2115.nptl UPnP/1.0 miniupnpd/1.0
   77329 System/1.0 UPnP/1.0 IGD/1.0
   66639 TBS/R2 UPnP/1.0 MiniUPnPd/1.2
   12863 Ubuntu/7.10 UPnP/1.0 miniupnpd/1.0
   11544 ASUSTeK UPnP/1.0 MiniUPnPd/1.4
What on earth is internet facing and running 2.4 Linux kernels?

everdayimhustln 1 day ago 2 replies      
Pervasive IoT device deployment without in-the-wild security considerations and rapid updates is likely to add to DDoS bot farms.
bsder 1 day ago 4 replies      
Why is IP spoofing STILL an issue? Why?
walterkobayashi 22 hours ago 1 reply      
Is it possible that SSDP Protocol can be run on a non-standard port (10000 - 65535) ?
IE6 1 day ago 0 replies      
So only slightly faster than GNU yes
dsl 1 day ago 4 replies      
It is unfortunate that CloudFlare shared enough PoC code to weaponize this.

Edit: for the downvoters, this isn't just my opinion, please read https://en.wikipedia.org/wiki/Responsible_disclosure

2D Syntax racket-lang.org
359 points by mr_tyzic  1 day ago   82 comments top 23
glangdale 1 day ago 1 reply      
I just love this. For some reason, our ways of specifying what we want a computer to do (I don't want to say 'language') remain mired in the same territory we started in with punched cards (which I actually got to use as a 10 year old, which was fun).

I'm not sure if this particular cut is the right idea, but it's good to see experimentation. A bias I have here is that I think these ideas should be rigorously separated from the concept that some sort of WYSIWYG editor will allow non-programmers to code.

nprescott 1 day ago 0 replies      
Very neat, I really appreciate the Racket community's willingness to experiment with syntax.

Looking at the examples reminds me of Julian Noble's "Elegant Finite State Machine" in Forth[0], which takes a different approach to the same problem of creating a language to better specify a problem (in both cases graphically).

[0]: http://galileo.phys.virginia.edu/classes/551.jvn.fall01/fsm....

ziotom78 1 day ago 0 replies      
I find this extremely interesting! In recent weeks one of my colleagues had to work on some legacy application that was developed using National Instruments' LabView [1]. For those who do not know it, it is a visual language to develop interfaces to scientific instruments. Everything is done visually, including ifs and for loops.

My colleague, who has extensive experience with languages like C# and Assembly, is extremely frustrated by this way of working. Everything must be done using a mouse, and even the simplest tasks require some thought in order to be implemented properly. (Although I must say that he praises LabView's hardware support and its Visual Basic-like ease in developing GUIs.)

I find Racket's 2D syntax to be far more promising than LabView's approach:

1. You can code it using a text editor: unlike LabView, no mouse is required;

2. Only a few classes of statements are affected by this (LabView forces you to do everything visually, even function definitions and mathematical operations);

3. You use this feature only if you think it helps; otherwise, plain text syntax is always available.

As a side note, I would like to give kudos to the Racket developers for gems like this. Racket really seems to be a language which makes language experiments easy to implement and try!

[1] http://www.ni.com/en-us/shop/labview.html

hyperion2010 1 day ago 1 reply      
It looks like this (e.g. `#2dcond`) implements a way to directly embed other languages in a racket file [0] and avoids the problems encountered when trying to do it using the `#reader` syntax [1] in a source file. Essentially letting you have multiple readtables in a file (probably not nestable though). I could be wrong about this (need to look more carefully when I have more time), but nonetheless could allow direct embedding of completely alternate syntax with the right setup.

[0] https://github.com/racket/2d/blob/master/2d-lib/private/read...

[1] https://docs.racket-lang.org/guide/hash-reader.html

jacobparker 1 day ago 3 replies      
Nicely done!

Different but similar joke for C++: http://www.eelis.net/C++/analogliterals.xhtml

kazinator 1 day ago 1 reply      
I have some reservations about how this is designed.

All we need are columns labeled with conditions. We don't need rows. And the matrix can just have true/false/don't-care entries, with code assigned to rows.

Concretely, say we have these conditions:

 (> x y)
 (stringp foo)
 (oddp n)
Right? Okay, so now we can identify the combinations of these and assign them to code like this:

 (> x y)   (stringp foo)   (oddp n)
 #t                        #t          (whatever)
           #t              #t          (other-thing)
 #t        #f                          (etc)
There could be a way to mark some of the rows as having "fall through" behavior. If they match, the expression is evaluated (for its side effects, obviously), but then subsequent rows can still match.

This could be worked into a straightforward S-exp syntax without any diagramming shennanigans:

 (table-cond
   (> x y)  (stringp foo)  (oddp n)
   #t       ()             #t        (let [...] (whatever))
   ()       #t             #t        (other-thing)
   #t       #f             ()        (etc))
Here, don't cares are denoted using (). Something else could be chosen.

A #f entry means "must be explicitly false". A blank column entry is a "don't care"; that condition is not taken into account for that row.
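The parent's table-cond could be prototyped in a few lines; here is a hedged sketch in Python (the name table_cond and its calling convention are invented here), using None for the () don't-care entries:

```python
def table_cond(conditions, rows):
    """First-match dispatch over a truth table, in the parent's style.

    conditions: zero-argument predicates (the column headers).
    rows: (pattern, action) pairs; pattern entries are True, False,
          or None for don't-care. Returns the first matching action's result.
    """
    actual = [bool(c()) for c in conditions]
    for pattern, action in rows:
        if all(want is None or want == got for want, got in zip(pattern, actual)):
            return action()
    raise ValueError("no row matched")

x, y, foo, n = 3, 1, "hello", 5
result = table_cond(
    [lambda: x > y, lambda: isinstance(foo, str), lambda: n % 2 == 1],
    [((True, None, True), lambda: "whatever"),
     ((None, True, True), lambda: "other-thing"),
     ((True, False, None), lambda: "etc")],
)
assert result == "whatever"  # first row matches: x > y holds and n is odd
```

Fall-through rows, as the parent suggests, would just mean continuing the loop after running the action instead of returning.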

hota_mazi 1 day ago 2 replies      
While I appreciate and respect Racket's willingness to experiment and innovate, I'm a bit puzzled by this.

Tables are neat to read but pretty annoying to write, especially in ASCII form. It's true that code is read much more often than written, but still, I wonder how useful this really is.

sharpercoder 1 day ago 0 replies      
Tables are generally a very good idea for languages. SpecFlow/cucumber as most notable example, but I can see others benefitting greatly as well.
b123400 1 day ago 0 replies      
It reminds me of Funciton, https://esolangs.org/wiki/Funciton
fao_ 1 day ago 2 replies      
The reason why I think that this will not thrive, as other projects have not thrived, is because (at least initially) it adds to the mental burden. Scanning a cond for me is almost instantaneous. It took me a couple of very long seconds for me to figure out what was happening within that table, and even though I recognized it as a truth table, it was not easy to read. The information was too separate on screen to easily compare.

I think that were programming initially presented as such, this would not be a problem, but I expect that many developers are so finely attuned and specialized to text that other methods will not take off purely because of the learning curve.

ooqr 1 day ago 0 replies      
Surprisingly exactly what the title makes it sound like. Very cool!
agumonkey 1 day ago 0 replies      
Reminds me (again) of Jonathan Edwards research. He did many editors with tables as first class so that you can avoid boolean nestings, and simplify verification / closure of your boolean mappings.

He published a video, sadly in flash http://www.subtext-lang.org/subtext2.html

Here's an article about "schematic tables" http://aigamedev.com/open/review/schematic-table-conditional...

joshlemer 1 day ago 0 replies      
If anyone wants to see some ascii-art in Scala, they just need to look at the Akka Streams-graph api (http://doc.akka.io/docs/akka/current/scala/stream/stream-com...)
vidarh 1 day ago 0 replies      
I love attempts at visual programming. As a kid I used to pore over the then-fashionable ads for CASE (Computer-Aided Software Engineering) tools in DDJ and elsewhere and imagined them to do far more than they actually did... Also attempts like Amiga Vision [1]

One of the software engineers I like to go a bit fanboy-ish about is Wouter van Oortmerssen, who I first got familiar with because of Amiga E, but who has a number of interesting language experiments [2], one of which includes a visual language named Aardappel [3] that also used to fascinate me.

There are a number of problems with these that have proven incredibly hard to solve (that this Racket example does tolerably well on, probably because it doesn't go very far):

1. Reproduction. Note how the Amiga Vision example is presented as a video - there is not even a simple way of representing a program in screenshots, like what you see for the examples of Aardappel, which at least has a linear, 2D representation. That made Amiga Vision work as a tool, but totally fail as a language. This is even a problem for more conventional languages on the fringe, like APL, which uses extra symbols that most people won't know how to type. The Racket example does much better in that it can be reproduced in normal text easily.

2. Communication. We talk (and write) about code all the time. Turns out it's really hard to effectively communicate about code if you can't read it out loud easily, or if drawing is necessary to communicate the concepts. Ironically, if you can't read the code out easily, it becomes hard for people to visualise it too, even if the original representation is entirely visual. This example does ok in that respect - communicating a grid is on the easier end of the spectrum.

3. Tools. If it needs special tools for you to be effective, it's a non-starter. This Racket example is right on the fringes of that. You could do it, but it might get tedious to draw without tooling (be it macros or more). On the other hand the "tool" you'd need to be effective is limited enough that you could probably implement it as macros for most decent editors.

I spent years experimenting with ways around these, and the "best" I achieved was a few principles to make it easier to design around those constraints:

A visual language needs a concise, readable textual representation. You need to be able to round-trip between the textual representation and whatever visual representation you prefer. This is a severe limitation - it's easy to create a textual representation (I had prototypes serialising to XML; my excuse is it was at the height of the XML hype train; I'm glad I gave that up), but far easier to make one that is readable enough, as people need to be able to use it as a "fallback" when visual tools are unavailable, or in contexts where they don't work (e.g. imagine trying to read diffs on Github while your new language is fringe enough for Github to have no interest in writing custom code to visualise it; which also brings up the issue of ensuring the language can easily be diffed).

To do that in a way people will be willing to work with, I think you need to specify the language down to how comments "attaches" to language constructs, because you'll need to be able to "round-trip" comments between a visual and textual representation reliably.

It also needs to be transparent how the visual representation maps to the textual representation in all other aspects, so that you can pick one or the other and switch between the two reasonably seamlessly, so that you are able to edit the code when you do not have access to the visual tool, without surprises. This makes e.g. storing additional information, such as e.g. allowing manual tweaks to visual layout that'd require lots of state in the textual representation that people can't easily visualise very tricky.

Ideally, a visual tool like this will not be language specific (or programming specific) - one of the challenges we face with visual programming, or even languages like APL that uses extra symbols, is that the communications aspect is hard if we can not e.g. quickly outline a piece of code in an e-mail, for example.

While having a purely textual representation would help with that, it's a crutch. To "fix" that, we need better ways of embedding augmented, not-purely-textual content in text without resorting to images. But that in itself is an incredibly hard problem, to the extent that e.g. vector graphics supports in terminals was largely "forgotten" for many years before people started experimenting with it again, and it's still an oddity that you can't depend on being supported.

Note that the one successful example in visually augmenting programming languages over the last 20-30 years, has been a success not by changing the languages, but by working within these constraints and partially extracting visual cues by incremental parsing: syntax highlighting.

I think that is a lesson for visual language experiments - even if you change or design a language with visual programming in mind, it needs to be sort-of like syntax highlighting, in that all the necessary semantic information is there even when tool support is stripped away. We can try to improve the tools, but then we need to lift the entire toolchain starting with basic terminal applications.

[1] https://www.youtube.com/watch?v=u7KIZQzYSls

[2] http://strlen.com/programming-languages/

[3] http://strlen.com/aardappel-language/

lispm 1 day ago 0 replies      
Adding tabular display to Lisp is useful, IMHO. I was thinking about using something similar, but mostly where code looks like data or specifically for Lisp data. I would not be very interested to use tables for control structures, but would like more support for auto-aligned layout for control structures.
kovek 1 day ago 0 replies      
I wonder what nesting of tables would look like? I guess if you had function calls inside, it would look nicer than drawing a table inside a table.
niuzeta 1 day ago 0 replies      
Okay. I love this and everything about this.

> This notation works in two stages: reading, and parsing (just as in Racket in general). The reading stage converts anything that begins with #2d into a parenthesized expression (possibly signaling errors if the ╔ and ═ characters do not line up in the right places).

I'm cracking up, oh my god.

nemoniac 1 day ago 0 replies      
Neat idea. It draws directly from the approach to learning functions of multiple complex arguments in www.htdp.org

Looking forward to 3D syntax for functions of 3 arguments.

GregBuchholz 1 day ago 1 reply      
Yes, and next I'd love to see big parentheses:

 [ASCII art, garbled in transcription: oversized parentheses drawn around an if over (+ a b) and (- c d), a (case x (1 'foo) (2 'bar) (3 'baz)), and a (cond ((> y 2) 'quux) (t 'error))]
((http://imgur.com/oI0zVm3) if that isn't rendering for your setup)

oh_sigh 1 day ago 2 replies      
What editor do racketeers commonly use? I like the idea, but this seems like a burden for code editing in most normal editors except for perhaps emacs picture mode.
igravious 1 day ago 0 replies      
Nobody has mentioned the dependently typed prog-lang Epigram yet?

Epigram uses a two-dimensional, natural deduction style syntax, with a LaTeX version and an ASCII version. Here are some examples from The Epigram Tutorial:


The natural numbers

The following declaration defines the natural numbers:

       (         !        (           !      ( n : Nat   !
  data !---------!  where !----------!   ;   !-----------!
       ! Nat : * )        !zero : Nat)       !suc n : Nat)
The declaration says that Nat is a type with kind * (i.e., it is a simple type) and two constructors: zero and suc. The constructor suc takes a single Nat argument and returns a Nat. This is equivalent to the Haskell declaration "data Nat = Zero | Suc Nat".
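For readers more at home outside Epigram and Haskell, the same inductive definition can be sketched in Python (a hypothetical Peano encoding, just to mirror the declaration):

```python
from dataclasses import dataclass

@dataclass
class Zero:
    """The base constructor: zero : Nat."""

@dataclass
class Suc:
    """The inductive constructor: suc n : Nat."""
    pred: object  # a Zero or a Suc

def to_int(n) -> int:
    # fold a Peano numeral down to a machine integer
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

assert to_int(Suc(Suc(Zero()))) == 2  # suc (suc zero) is the numeral 2
```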

Project lives here: https://code.google.com/archive/p/epigram/ and the last commit on https://github.com/mietek/epigram2 is five years ago, which leads me to believe that the project is abandonware.


kv85s 1 day ago 0 replies      
ebbv 1 day ago 2 replies      
This is not a good idea. That code is gross and illegible. There's definitely better ways to handle anything where you think this is the solution.
Moderate drinking associated with atrophy in brain related to memory, learning washingtonpost.com
275 points by tuxguy  2 days ago   234 comments top 30
SubiculumCode 2 days ago 2 replies      
In my phd work I was heavily involved in hippocampal segmentation, and I can say with confidence that FSL FIRST is not a state-of-the-art segmentation method. It belongs to an earlier generation of segmentation methods with poorer reliability, which have contributed to a lot of contradictory results in my field of hippocampal development. I would not use it in my research.




[edit]I had meant to point a link to my chapter on hippocampal development.https://www.researchgate.net/publication/314194708_Hippocamp...

startupdiscuss 2 days ago 6 replies      
But they define "moderate" as 2 drinks/day for a man.

That is 14 drinks a week which puts you in the second highest decile for Americans!


codyb 2 days ago 4 replies      
Alcohol is getting a pretty bad rap to me the more I read about it. After reading this I was curious about ways to _increase_ hippocampal function. According to [0] it seems like exercise can increase hippocampal function. Another link I can't find again since I found it on my phone and am now on my laptop indicated things like learning languages, and an omega 3 rich diet can also help.

This is good news for me since NYC is very much a drinkers town and I enjoy partaking, but am also learning Italian, exercising more, and frequently eat with omega 3s in mind.

Here's to hoping they cancel out!

[0] - https://www.nature.com/nature/journal/v472/n7344/full/nature...

ACow_Adonis 2 days ago 3 replies      
As a stats man, before I read the article I said to myself: "bet it's from a survey".

With that kind of methodology, I imagine the "moderate" drinkers are going to contain the group of people who said they (frequently) drink, but not that much. I'd be willing to bet there will be a relationship between reporting that you drink frequently and the negative effects of alcohol that will outweigh the attempt to self-report how much you drink (because the latter won't be reported accurately, but people who don't drink will tend to more accurately report and select themselves out into another category). Not only is it easier to self-report whether you are a drinker/non-drinker than it is to report you are a moderate/heavy drinker, I imagine there's also another confounding effect coming through, given that there generally has to be something exceptional about you to be a non-drinker in our societies.

Self-disclosure: practically a non-drinker, nothing ideologically against it. Might have a beer every two months socially with food.

Thoughts? Especially from anyone reading the actual study?

hamstercat 2 days ago 1 reply      
I'd be interested to know what "3 times more" means. If your chance of brain damage is 0.5% normally and jumps to 1.5% with moderate drinking, that isn't too bad. If it jumps from 10% to 30%, that's another story.
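A quick sketch of the distinction the parent is drawing (the rates here are the parent's hypothetical numbers, not the study's): a fixed relative risk implies very different absolute increases depending on the base rate.

```python
def absolute_increase(base_rate, relative_risk):
    """Extra absolute risk implied by a relative risk at a given base rate."""
    return base_rate * relative_risk - base_rate

# 0.5% base rate tripled: only +1 percentage point
assert round(absolute_increase(0.005, 3), 3) == 0.01
# 10% base rate tripled: +20 percentage points - "another story"
assert round(absolute_increase(0.10, 3), 3) == 0.2
```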
fludlight 2 days ago 1 reply      
Why not link to the actual study?


scottLobster 2 days ago 3 replies      
So just for reference, 14 units of alcohol (the low-end correlated to atrophy) is approximately:

 7-9 (US) shots of hard alcohol (assumed 37.5% ABV)
 7 pints of Lager (assumed 4% ABV)
 9.3 125 ml glasses of average-strength wine (assumed 12% ABV)

Looks like my 2-5 bottles of beer a week habit is fine. :)
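For anyone checking the arithmetic: a UK unit is 10 ml of pure ethanol, so the conversion is one line (the helper name is made up here, and figures are approximate):

```python
def alcohol_units(volume_ml, abv_percent):
    # UK definition: 1 unit = 10 ml of pure ethanol
    return volume_ml * abv_percent / 100 / 10

# a 125 ml glass of 12% wine is 1.5 units, so 14 units/week is ~9.3 glasses
assert round(14 / alcohol_units(125, 12.0), 1) == 9.3
# a ~44 ml US shot at 37.5% is 1.65 units, so 14 units is ~8.5 shots
assert round(14 / alcohol_units(44, 37.5), 1) == 8.5
```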


hellofunk 2 days ago 3 replies      
I guess you need to decide what is important to you. For example, a few years ago, this was published [0]:

> One of the most contentious issues in the vast literature about alcohol consumption has been the consistent finding that those who don't drink tend to die sooner than those who do


richieb 2 days ago 0 replies      
As a comedian once said "They say that alcohol kill brain cells. Yeah, but only the weak ones!"
nickledave 2 days ago 0 replies      
Haven't seen anyone else comment on this specific statement from the abstract: "No association was found with cross sectional cognitive performance or longitudinal changes in semantic fluency or word recall." Based on other people's comments, seems like the main finding here is that maybe the hippocampus shrinks as we get older and maybe there's an effect on lexical fluency. We already know that, even in "healthy" subjects, the brain shrinks due to aging, and I think no-one can say yet how much atrophy can take place before it impacts memory or learning.
water42 2 days ago 0 replies      
this was posted earlier in the month and most of the top level comments have already been made


carbocation 2 days ago 2 replies      
A Mendelian randomization study that I find convincing suggests that there is no obvious safe level of alcohol intake from a cardiovascular standpoint.

This observational study linked by tuxguy points in the same direction, and it seems ripe for follow-up with genetic work that could support (or help refute) the likely direction of causality.

Cyph0n 2 days ago 9 replies      
I'm always interested to see how people react to a news article or study that criticizes alcohol. I have noticed over time that alcohol consumption is usually a taboo subject to discuss for some reason, and almost everyone I talk to immediately gets on the defensive when I say that I don't drink and never will. Can anyone shed some light on why exactly people don't like it when someone doesn't drink, especially when it's for health reasons?

Anyways, I'm not surprised that most of the comments here are either outright defensive or are just proposing ways to undermine the content of the study. Reading through the comments, arguments include: the one size fits all "correlation is not causation", ad hominem attacks on WaPo itself, half-jokingly accusing the article author of drinking, and arguing the semantics of what constitutes a "drink".

mortenjorck 2 days ago 10 replies      
Study finds correlation. Article mentions caveat that correlation is not causation. Headline explicitly states causation.

This is certainly a very interesting correlation, and demands further study. Perhaps there is a causal link. But it's just (predictably) irresponsible journalism to head such a piece with a factually incorrect headline.

devoply 2 days ago 2 replies      
Well, why wouldn't it? Alcohol is not a health food. It crosses the blood-brain barrier and is toxic to cells. So yeah, put toxic stuff up there and there will be some consequence.
zelos 2 days ago 0 replies      
Did they consider that it might not be the alcohol itself, but correlation with behaviour? People who come home from work and have a drink or two are much less likely to do something that keeps their brain active.
protonfish 2 days ago 0 replies      
My concern about this study is that it used a large number of different measures: multiple structural brain measures and cognitive tests, then swept the negative ones under the rug and reported only positive correlations. This is like the XKCD jelly bean test https://xkcd.com/882/ There is no reason to accept these types of results without replication.
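The multiple-comparisons worry is easy to quantify; a sketch of the familywise error rate under independence (the test count below is illustrative, not the study's actual number of measures):

```python
def familywise_error(alpha, n_tests):
    # probability of at least one false positive across n independent tests
    return 1 - (1 - alpha) ** n_tests

# at p < 0.05, twenty independent measures give ~64% odds of a spurious hit
assert round(familywise_error(0.05, 20), 2) == 0.64
```

This is exactly why reporting only the measures that came up positive, without correction or replication, is suspect.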
spdionis 2 days ago 1 reply      
Yeah but what's the point of having a good memory if you can't have a drink?
kmm 2 days ago 1 reply      
I wonder if there's a difference between acute and chronic alcohol consumption of the same amount. Drinking 14 units in one event every 2 weeks amortizes to one unit per day, but would it have the same effect?
nashashmi 2 days ago 0 replies      
Reminds me of what a nurse once said: wet brain syndrome, or being drunk even when you are sober.
KekDemaga 2 days ago 1 reply      
My personal rule still stands "All substances that impair in the short term, when used daily, impair in the long term."
clubm8 2 days ago 0 replies      
What if I drink fourteen drinks once every 4 to 6 months?
tlholaday 2 days ago 1 reply      
Meanwhile, President Trump is a lifelong teetotaler.
accountyaccount 2 days ago 0 replies      
sure, that's the point
mothers 2 days ago 5 replies      
would you give a 5 year old a beer? no? exactly. everyone knows drinking is bad for you. they do it anyway, because "reasons" [1]. anyone who disagrees is just being irrational.

[1] which may include partying and hooking up, as well as removing any awkwardness they have.

melling 2 days ago 0 replies      
Maybe now we can put to rest the "coding while drunk" discussion that occurs on HN from time to time:


B1narySunset 2 days ago 0 replies      
What about binge drinking on the weekend?
coldtea 2 days ago 0 replies      
>Even moderate drinking causes atrophy in brain area related to memory, learning

Well, that's for the better. Seeing that I drink to forget.

TimMurnaghan 2 days ago 5 replies      
Seriously, stop linking to sources that block adblockers. As far as I'm concerned they've taken themselves off the web.
mothers 2 days ago 1 reply      
MITs gas-powered drone is able to stay in the air for five days at a time techcrunch.com
283 points by janober  2 days ago   171 comments top 24
projectramo 2 days ago 7 replies      
At first I thought the kicker was going to be: the gas is helium, it just floats.

And then I thought: why is that a joke? Could you save fuel keeping it aloft with a lighter than air gas and just use the fuel to move it around?

emmelaich 2 days ago 3 replies      
Just goes to show there's nothing remotely like fossil fuels for energy density.
Abishek_Muthian 2 days ago 1 reply      
To be frank, for the non-Americans who read the article, the first thought would be that the drone is actually using some form of 'gas' and not liquid petroleum. I presume 'gas' in this context means gasoline and not LPG.
jobu 2 days ago 3 replies      
That's neat, but not very revolutionary. The Rutan Voyager flew for 9 days (around the world) without refueling back in the 80s - https://en.wikipedia.org/wiki/Rutan_Voyager
rdiddly 2 days ago 2 replies      
Well, they inadvertently proved once again just how energy-dense and how incredibly, ridiculously efficient liquid petroleum fuels are; what a boon and a curse they've been to humanity during this brief one-or-two-century blip; and how difficult it will be to replace them with anything that comes even close to current levels of energy output/consumption.

(That's a problem statement, not an ad.)

pj_mukh 2 days ago 1 reply      
Confused. There seem to be plenty of drones that can stay up there for 120 hours or so [1]. Presumably, their solar additions should be able to do even more?

[1]: http://www.airforce-technology.com/features/featurethe-top-1...

arikr 2 days ago 0 replies      
Does anyone know where the US Air Force posted the challenge?

Is there a good repo for all of the US Govt's challenges/requests (e.g. DARPA ones + others like USAF?)

flyGuyOnTheSly 2 days ago 0 replies      
Very impressive.

I was quick to look up how long The Spirit Of Butts' Farm [0] was in the sky for: only 36 hours.

In 5 days, assuming MIT's model held a similar speed, it could probably circumnavigate the globe.

[0] http://news.nationalgeographic.com/news/2002/08/0805_020805_...

dougmany 2 days ago 1 reply      
Curious how they used gpkit for the design, I found this: http://gpkit.readthedocs.io/en/latest/examples.html#simple-w...
jacquesm 2 days ago 1 reply      
There is no way a gas powered drone will help bring internet access to some remote area.

Why do these press releases always include the most farfetched goals?

rebootthesystem 2 days ago 1 reply      
I'm sorry to have to be negative on this. I look at a bunch of this stuff that is DOD/Armed Forces funded and all I have seen for decades is nothing more than what I would call scams. A total waste of taxpayer money. And this isn't universities only; it's companies that win stupid-as-fuck "research" grants and produce crap in return.

There are a few companies out there playing with what are nothing more than ridiculously expensive toy RC planes (< 6 ft wingspan) that almost anyone could build out of parts available in the open market.

This MIT thing is nothing more than an overgrown model airplane. It uses what looks like a stock 4-stroke model airplane engine with a stock model airplane propeller. It probably uses stock model airplane servos, electronics, and 2.4 GHz TX/RX.

Oh, wait, advanced materials. Nope. I have been designing and building RC airplanes as a hobby for three decades. I was vacuum-bagging large (12 foot wingspan) glider wings from fiberglass, carbon fiber and also Kevlar twenty years ago. I still have and use my vacuum bagging setup to build planes today. I have also made custom carbon fiber propellers by CNC machining aluminum molds and stuffing them with epoxy impregnated carbon fiber rovings.

So, this is what I would call "advanced hobby" stuff.

Special airfoils? Nah. Any serious RC glider pilot who has built planes knows what airfoils to use. Not a secret. These guys didn't do anything special on that front.

Oh, but it can be taken apart and stuffed into a box for FedEx shipping. Guess what, so can any large scale RC glider. Nothing new there. Make the wing in six foot sections and you are golden.

Yeah but...

Yeah but nothing.

Here's a guy flying an OFF THE SHELF 6 meter wingspan RC glider. What do you think the wings are made from?


That's nearly 20 feet. The MIT "drone" has a 24 foot wingspan.

Here's another guy flying a 6.5 m, 21 ft motorized glider:


Replace the electric motor with an internal combustion engine and a fuel tank and bingo. Again, off the shelf.

And here are a BUNCH of guys with huge gliders, powered and not, including one that has a 51 FOOT wingspan. And, look, no truck for launch, they use another large scale RC airplane to tow it up to altitude.

Here's a list of a bunch of large scale RC sailplanes you can buy off the shelf today.


Here's an 8 to 8.8 meter (~26 to 27 ft) DG 1000 S you can buy and build for about $5,000 Euro:


Shame on MIT for being a part of a scam. It's embarrassing.

PatentTroll 2 days ago 4 replies      
I remember when we used to call them model airplanes.
averagewall 2 days ago 4 replies      
So many fascinating inventions seem to be intended for search and rescue. We've had jetpacks, robot drones, this drone, robot walkers, crashable drones; even the Boston Dynamics Atlas walking robot is for search and rescue.

Is there actually a need for so many robots? Even if there is, is the market big enough to make economical sense? It seems like it's just an easy thing to imagine when you can't think of a real purpose for your robot.

HockeyPlayer 1 day ago 0 replies      
SkyFront (https://skyfront.com/) builds hybrid drones that fly 10 times as long as battery-only drones.
mmmrtl 2 days ago 0 replies      
Too bad the project is (presumably) slowing down now that Warren Hoburg has been selected as an astronaut.
jmelloy 2 days ago 2 replies      
So when asked to build a solar-powered airplane, they ... built a gas-powered one?
faragon 2 days ago 1 reply      
The future: ultra-light drones with 90% of their weight being batteries, charging during the daytime so they can stay operative at night. Flying 24/7, forever (e.g. zeppelin-like).
jhallenworld 2 days ago 0 replies      
We need an efficient way to use solar to create gasoline.
penetrarthur 2 days ago 0 replies      
The less fuel is left, the lighter the drone.
balozi 2 days ago 6 replies      
I guess that when the vehicle isn't helping deliver communications to areas impacted by natural disasters or other emergencies, the Air Force will strap an AGM-114 Hellfire package on it and have it linger over combat zones for days.

I hope these college kids are bright enough to recognize that they are building killing machines unlike anything warfare has seen before.

ge96 2 days ago 0 replies      
Damn that's like a DLG on steroids
andybak 2 days ago 2 replies      
Gas in the American colloquial sense? Or actual gas?
wentoodeep 2 days ago 0 replies      
Equip it with neural net ML and it will find its way to the ground in no time, at least 10,000 times over.
honestoHeminway 2 days ago 0 replies      
They used a refill service drone?
Sorting 2 Tons of Lego, Many Questions, Results jacquesmattheij.com
313 points by darwhy  2 days ago   204 comments top 36
tuna-piano 1 day ago 2 replies      
I'm begging you, please post a full-speed video that shows a seemingly random assortment of pieces sorted into boxes with identical color/size/shape pieces. I think it would be insanely satisfying for many people to watch. I would literally grab popcorn and watch it for hours (and I think many others would too).

Given your current setup, it might be easiest to just take one bin of identical pieces and then sort it by color into various bins. I'm salivating over what this would look like.

If you posted that video, I bet it would rack up millions of YouTube views.

tzs 1 day ago 1 reply      
> The next morning I woke up to a rather large number of emails congratulating me on having won almost every bid (lesson 1: if you win almost all bids you are bidding too high)

That reminds me of the economist George Stigler's observation: "If you never miss a plane, you're spending too much time at the airport".

cfeduke 1 day ago 4 replies      
A few years ago a friend and I started work on a similar machine for sorting Magic: the Gathering cards. We weren't using any machine learning (this was beyond our capabilities) but our design was similar, except the bins were on a moving platform which slowed down as the weight of the cards in bins increased - due to the cheap parts/weakness of the motors we were using. Unfortunately it occupied an entire room in my house and was never completed and ultimately dismantled.
munificent 1 day ago 2 replies      
If, like me, you find photography of piles of sorted LEGO pieces hypnotically soothing, you will probably like this album I put together:


A while back, I sorted my childhood LEGO pile before sending it to my brother. 73 pounds of LEGO sorted by hand. I'm sure it says things about how strangely wired my brain is, but this was an eminently enjoyable endeavor.

JackC 1 day ago 1 reply      
This series is the best thing ever.

An idea: I have a three-month-old daughter who I'll definitely be buying Lego for when she gets old enough. What I'll be looking for is whatever's the least valuable in your collection -- big bins of cheap mixed stuff to see what she invents. Maybe that would be a cool side output for your project? Cheap chaff packs for kids of hackers ...

tinco 2 days ago 4 replies      
Not a fanatic builder at all, been almost two decades since I built something with Lego, but really surprised to see that the categories are not color separated.

Any esthetic Lego project would have color restrictions. Also, I imagine some colors are worth more than others, just because they're used in more projects.

Also, color separated Lego bins just look pretty.

marze 1 day ago 1 reply      
Why not go big? Write up a business plan that involves buying 100 tons of lego, making a bigger machine with 200 bins off the belt, or whatever, and making a new brickowl / bricklink site of your own. I'm sure you could find investors, you've got great publicity already. And you could propose it as a general sorting AI (or web sale of small item) firm in the long term, with the lego being your initial application.

You'd need to solve the inverse problem of how to efficiently go from bin to packing container, which doesn't seem too hard. Rotating carousels of bins with little robots or something. Set the item on a tray, and if it isn't the right item, puff it back into the bin.

tzs 1 day ago 2 replies      
Are there CAD models available for the various Lego components? I'm wondering if, given such models, you could use rendering software to generate labeled training images. That could allow you to get a large training set with many images for each kind of component in many different orientations, even for components you do not yet have.

If the CAD models are not available, how much work is it to make a CAD model given an instance of a particular component? I'd expect that for some components, such as those NxM plates, you'd only need to build one model by hand in CAD software, and then could algorithmically generate the models for other sizes.
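The synthetic-data idea can be sketched without any CAD software at all. Below, a rasterized rectangle stands in for a rendered part, and the label is known by construction; the part shape, sizes, and angle step are all invented for illustration:

```python
import math

def render_part(angle_deg, size=32):
    """Stand-in for a real CAD render: rasterize a crude 2x4-plate
    footprint rotated by angle_deg, as a binary pixel grid."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    img = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            dx, dy = x - size / 2, y - size / 2
            # rotate the pixel back into the part's own frame
            px = dx * cos_a + dy * sin_a
            py = -dx * sin_a + dy * cos_a
            if abs(px) <= 8 and abs(py) <= 4:  # the 2x4 footprint
                img[y][x] = 1
    return img

# One labeled example per orientation -- the label comes for free
# because we control exactly what was rendered.
dataset = [(render_part(a), "plate_2x4") for a in range(0, 360, 15)]
print(len(dataset))
```

A real version would swap the rasterizer for renders of actual part models, but the payoff is the same: arbitrarily many images per part, in any orientation, with perfect labels.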

BillFranklin 2 days ago 1 reply      
Jacques, John Tooker might be able to help you with the sorting issue: http://lego.jtooker.com/sorting/.
throwanem 2 days ago 0 replies      
I actually just gave away all my Lego to a colleague whose three sons are much better able to appreciate it than I am, and (because!) it's been quite a few years since I actually built anything.

But, that said, were I looking to buy parts, I'd hesitate to do so in bulk by these categories, unless the price were very good indeed - between the inability to match a parts list to a purchase order, and the extensive manual labor required to sort parts sold by mass at a useful degree of granularity, they'd need to be very cheap in order to make the purchase worth my while.

(On the other hand, most of my builds back in the day were spacecraft of various fictional types, and I heavily favored complexity, which means a large number of small parts, designing in wiring channels for LED lighting, and the like. I'm not sure how representative my comments here are of any other style.)

ChuckMcM 1 day ago 1 reply      
At some point you can completely reverse entropy and re-create the original sets :-) But in terms of just selling off excess Lego looking at the mix of the builder sets would let you figure out a sort of SKU you could put together and sell on Ebay or else where as a 'builder pack 1', 'builder pack 2' etc.
pveierland 2 days ago 3 replies      
One idea for selling the parts would be to keep a precise inventory of everything entered into the system. A precise inventory + a database of the parts needed for any LEGO set would allow customers to order an arbitrary set for a nice price. A total system which would take random parts as input, and output packaged complete LEGO sets would be neat.
jhassell 1 day ago 2 replies      
On commercial pecan farms, the nuts need to be graded based on color, size, and variety. Currently, pecans are tumbled in rotating chutes composed of grates of increasing spacing, so that pecans are sorted by size. Color and variety are not filtered, ultimately resulting in a lower sales price for the farmer. Your device could take this type of sorting to the next level.
mabbo 2 days ago 1 reply      
With the machine working so well, you could run each sorted batch through it again, and sort into very specific pieces/colours, if you wanted.

You just need to re-train on those classes instead- but I bet the front-end of the tensorflow network would already be trained well enough that re-training for new classes would be very fast.
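A rough sketch of that retraining idea, with numpy standing in for the actual TensorFlow setup: the "already-trained front-end" is simulated by fixed feature vectors, and only a small softmax head is trained for the new classes. All shapes and numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are outputs of the already-trained (frozen) front-end:
# 200 samples of 64-dim features, to be sorted into 3 *new* classes.
features = rng.normal(size=(200, 64))
labels = rng.integers(0, 3, size=200)
features[np.arange(200), labels] += 3.0  # make the classes separable

# Only this small softmax "head" is trained for the new classes;
# the expensive feature extractor is reused as-is.
W = np.zeros((64, 3))
onehot = np.eye(3)[labels]
for _ in range(300):
    logits = features @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    W -= 0.1 * features.T @ (probs - onehot) / len(labels)

accuracy = ((features @ W).argmax(axis=1) == labels).mean()
print(round(accuracy, 2))
```

The point of the sketch: retraining only the head is a tiny optimization problem compared to training the whole network, which is why switching class sets could be fast.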

nappy-doo 2 days ago 1 reply      
I've thought about this machine a number of times since the previous post. In that post, there was a movie showing the machine's operation, but it was operating at a slow speed, and he's mentioned multiple times that there's been speed improvements.

Does anyone know if there's a realtime movie anywhere?

phaedrus 17 hours ago 1 reply      
Something I've wanted to try for years: what about a Lego fractional-distillation tower? Similar to how oil refineries use a tower to sort hydrocarbon compounds by chain length based on their volatility, except in this case, instead of using heat, we would use air blowing upward in a long tube to make the pieces "volatile". Picture a 6" diameter clear tube stretching floor to ceiling with a powerful blower at the base. Or a smaller version could be prototyped using an air popcorn popper with the heating element disabled.

Essentially it would sort pieces according to their free-fall terminal velocity. You could exploit the Venturi effect to make the bottom of the tube have a higher velocity than the top, or simply add bleed-air holes at progressive levels. There could be take-off doors at different levels of the tube. A nice property is that if the ingress and egress mechanisms are designed elegantly, it could run continuously.

(Although, that last point is one reason I've never built it: it seems like there would be a fine line between "air powered sorter" and "air powered rock tumbler" and I worry it could wear the corners of all my Lego bricks if it goes wrong.)

Another potential issue is that there might not be much logical relationship between the pieces which share similar terminal velocities, and different pieces may have different or metastable terminal velocities depending on initial orientation.

Obviously this wouldn't be a perfect sorter, but it could be a good pre- or post- sorter for your visual system. And for my purposes it might work well since instead of adapting the system to suit my sorting preferences I could simply train my sorting expectations based on what I learn the machine tends to sort together.

Hmmm now that I have an industrial dust collector installed in my workshop, I actually already have the bulkiest / most expensive part of this invention. I just need a tall acrylic tube and I could try this!
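The physics behind the column is easy to put numbers on. This sketch uses the standard terminal-velocity balance v_t = sqrt(2mg / (rho * A * Cd)); the masses, areas, and drag coefficient below are rough guesses, not measured brick data:

```python
import math

def terminal_velocity(mass_kg, area_m2, drag_coeff=1.0, air_density=1.2):
    """Speed at which drag balances weight: v_t = sqrt(2*m*g / (rho*A*Cd))."""
    g = 9.81
    return math.sqrt(2 * mass_kg * g / (air_density * area_m2 * drag_coeff))

# Rough guesses for two part types (not measured values):
flat_plate = terminal_velocity(mass_kg=0.0006, area_m2=0.0005)    # 2x4 plate, face-on
brick_2x2 = terminal_velocity(mass_kg=0.0011, area_m2=0.00025)    # chunkier, smaller face

print(f"plate ~{flat_plate:.1f} m/s, brick ~{brick_2x2:.1f} m/s")
```

With the plate's terminal velocity roughly half the brick's under these guesses, a graded airflow would carry plates higher up the tube, which is exactly the fractionation effect described. The frontal area also changes with orientation, which is why the metastability worry above is real.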

jkingsbery 2 days ago 2 replies      
My five-year-old daughter mostly just wants to build castles, so if there were a less expensive way to get a large number of 2-peg-width bricks (especially in pink and purple, which she reminds me daily are her favorite colors), I could see that being useful.
justinpombrio 1 day ago 1 reply      
> Overall the machine works well but I'm still not happy with the hopper and the first stage belt. It is running much slower than the speed at which the parts recognizer works (30 parts / second) and I really would like to come up with something better.

It sounds like your existing hopper is reliable but slow? If that's the case, could you just make two (perhaps somewhat smaller) hoppers and merge their outputs?

Marazan 1 day ago 1 reply      
I suspect Mobile Frame Zero ( http://mobileframezero.com/mfz/ ) players would be delighted to get a source of the more esoteric bricks that go into their builds
Gravityloss 1 day ago 1 reply      
In part 1 you mention that under vibration the bricks self assemble into bridges that obstruct the conveyor belt, and you want to minimize that.

How about maximizing it instead? Try to build bridges as long as possible? Or some other fitness function?

In theory you could control the vibration sequence and the part feeding sequence (once you have sorted all the parts :) ) and use an evolutionary algorithm...

zitterbewegung 2 days ago 1 reply      
I wonder if Jacques tried dropout to increase the efficiency? Also, if he made the trained neural net available, you could make a Lego identifier app.
arnsholt 1 day ago 0 replies      
> Judging by the increase in accuracy from 16K images to 60K images there is something of a diminishing rate of return here.

This is a well-known thing in machine learning. As far as I know all ML algorithms have this property, where the learning curve is basically logarithmic and each additive increase in performance requires multiplicatively more training material.
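A toy logarithmic model makes the diminishing-returns point concrete. The accuracy figures below are hypothetical, since the comment doesn't quote the article's exact numbers:

```python
import math

# Hypothetical learning-curve points:
n1, acc1 = 16000, 0.90
n2, acc2 = 60000, 0.93

# Fit accuracy = a + b*ln(N): each fixed gain costs a fixed *multiple* of data.
b = (acc2 - acc1) / (math.log(n2) - math.log(n1))
a = acc1 - b * math.log(n1)

# Under this model, the next +0.03 needs the same 3.75x multiplier again:
n3 = math.exp((0.96 - a) / b)
print(round(n3))  # ~225,000 images, i.e. 60,000 * (60,000 / 16,000)
```

That multiplicative cost is the "logarithmic learning curve" in miniature: equal accuracy steps, geometrically growing datasets.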

wand3r 1 day ago 0 replies      
I have a suggestion but am not a Lego guy. Maybe you can do a few things now that the sorter works, potentially using it to automate or minimize order fulfillment.

The idea is that if you can get a dataset of Lego packs / models you can use your inventory to make them. You can offer sales of the kits based on dynamic info. You can also provide inventory dynamically to users to purchase parts or upload a build manifest which will be filled.

I think the 2 huge assets you have are sorting for processing AND fulfillment, and real-time inventory levels of verified authentic parts.

Also, I would make some sort of contest, auction fun game where a person can win the whole pile of discarded stuff. Idk what's in there but I bet someone would want it.

sanj 1 day ago 1 reply      
I would pay good money to send you a big box of Lego from mixed sets and have it come back sorted into one bag per set.

You'd probably need the set numbers so that it'd be easier to choose which set to associate a part with.

I'm more than happy to be a beta customer!

stuaxo 1 day ago 0 replies      
Now you have a bunch of pieces each with different classifications.

Possibly you could feed in parts lists from a huge amount of existing sets - have some sort of (handwavey learning thing or even markov chain), and generate mixes of pieces based on data sets from spaceships / cars / etc.

pulse7 1 day ago 0 replies      
You can really earn money by modifying this machine to sort various nuts and fruits by quality in the food industry...
marze 1 day ago 0 replies      
On speeding up your feed, how about adding another belt prior to the main belt, with a variable speed?

You drop parts onto the variable speed belt from enough sources to get a good feed rate, and constantly vary the speed of that belt with video feedback to drop them onto the main belt with reasonably even spacing.
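The feedback loop described above can be sketched as a simple proportional controller; every number below is invented:

```python
# Toy simulation of the idea: a camera measures the gap behind each
# dropped part, and a P-controller nudges the feed belt speed toward
# even spacing on the main belt.
target_gap = 10.0            # desired part spacing on the main belt (cm)
belt_speed = 5.0             # feed belt speed (cm/s), the control variable
gain = 0.2                   # proportional gain

measured_gaps = [4.0, 6.5, 8.0, 12.5, 11.0, 10.2]  # pretend camera readings

for gap in measured_gaps:
    error = target_gap - gap           # positive = parts too bunched
    belt_speed -= gain * error         # bunched -> feed slower, sparse -> faster
    belt_speed = max(0.5, belt_speed)  # never stall completely

print(round(belt_speed, 2))
```

A real setup would need damping (the gain here will oscillate if pushed too high) and a sensible way to handle an empty belt, but the core loop is this small.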

punnerud 1 day ago 0 replies      
Why not sort Lego the same way we sort plastic in Norway? Watch this video; there's a drawing of the technique at 0:35, and it's shown in action from 0:52: https://youtu.be/Mi0FHNLkim0
codewithcheese 1 day ago 1 reply      
I think the sorted photos are great art. Maybe more valuable than the contents ;)
Poiesis 1 day ago 1 reply      
Technics and larger wheels are useful for Lego robotics (FLL) teams everywhere; certainly useful for our team.

Edit: this is in response to your asking for feedback about categories/demand.

mherrmann 1 day ago 1 reply      
You should extend it so it can assemble the pieces ;)
tuna-piano 1 day ago 1 reply      
Has Lego approached you at all? If they don't have similar in-house technology, I could see this adding value to them.
jokoon 1 day ago 1 reply      
Impressive, but I'm still wondering what 2 tons of lego look like.
unityByFreedom 1 day ago 1 reply      
Pretty cool to keep seeing follow-ups on this.

Jacques, any plans to publish your models or image dataset somewhere?

andresgottlieb 2 days ago 3 replies      
You should use https://www.cloudflare.com/. It's free and very easy to implement.
Show HN: Hackerhunt categorised curation of Show HN submissions hackerhunt.co
324 points by degif  2 days ago   65 comments top 32
degif 2 days ago 4 replies      
Hey HN! I'm one of the two developers behind Hackerhunt. As much as I love Hacker News and its ranking algorithm for the front page, it has a downside for Show HN submissions: a lot of cool and useful stuff people have actually made themselves gets lost in /shownew without a real chance to reach the right audience. That's where the idea of a curated and categorised, à-la-Product-Hunt, list was born.

This is a very early proof of concept and any suggestions on how to make it better are welcome!

superasn 2 days ago 0 replies      
Really cool site. I recently asked HN[1] why did the free alternative to PH die and found this postmortem thread[2].

I think it really boils down to making sure that people coming to your site find something new and creative all the time, to help turn lurkers or one-time visitors into repeat visitors. I think PH does that quite well with their podcasts, daily digests, twitter updates (though they are forced, they do work), etc. Also, you're building a community site, so if the traffic dies in a month keep at it; it looks like sites like these take many years to gain traction. Basically I think you have something really great going here, just make sure to focus on bringing the visitors back and you will definitely have a winner!

[1] https://news.ycombinator.com/item?id=14584527

[2] https://news.ycombinator.com/item?id=11233967

andrewjrhill 2 days ago 1 reply      
This is great. You have a small (albeit fun) bug when hitting next page though: http://i.imgur.com/eqTVBPy.png :)
lamby 2 days ago 1 reply      
I like how Hackerhunt is the most popular item on Hackerhunt:


ruiramos 2 days ago 0 replies      
Well done! I'd just add a way of suggesting tags for the submitted projects to make them more useful.
prawn 2 days ago 0 replies      
Really like the design and agree that the idea serves a purpose. Must be really discouraging to brave the Show HN and have it fall flat.

Wondered if maybe having the list for today, then perhaps some other recent options in a slimmer format either beside or below?

Nice work though!

brimstedt 2 days ago 0 replies      

A feature ive been missing on hackernews, that perhaps you'd be willing to add, is a community written tldr for each link.

I.e., apart from title and link, a short (200 chars or so) description and tl;dr.

jacquesm 2 days ago 0 replies      
Are you aware of the thread of threads?


overcast 2 days ago 0 replies      
If you can nail down the categorization, get some more historical stuff, and build out that newsletter (or just suggestions), this should be a great utility.
veli_joza 2 days ago 2 replies      
Looks great. Can you tell us how you built it? I'm most interested in automatic categorization of submitted articles.
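The authors haven't said how the tagging works, but as a baseline, keyword rules over titles get surprisingly far. The tag vocabulary and keywords below are invented, and are nothing like whatever the site actually does:

```python
# Invented tag vocabulary and keyword rules -- a crude baseline
# tagger for Show HN titles.
RULES = {
    "javascript": ("javascript", "react", "node", "vue"),
    "machine-learning": ("neural", "tensorflow", "classifier", "deep learning"),
    "open-source": ("open-source", "open source", "foss"),
    "hardware": ("raspberry", "arduino", "drone", "sensor"),
}

def categorize(title):
    t = title.lower()
    tags = [tag for tag, keys in RULES.items() if any(k in t for k in keys)]
    return tags or ["uncategorized"]

print(categorize("Show HN: A React component library"))
print(categorize("Show HN: Sorting Lego with a neural network"))
```

Anything fancier (a trained text classifier, submitter-supplied tags) could fall back to rules like these for the long tail of titles with no clear signal.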
fiiv 2 days ago 0 replies      
Hi! Just thought I'd report a bug - when you search, and try to click the comments icon to go to the HN post, it goes to https://news.ycombinator.com/item?id=undefined
sauravt 2 days ago 1 reply      
This is very useful. As a frequent HN user, I find myself scrolling down the Show HN tab quite often, and the current UI doesn't let you go more than 3 pages deep (~5-6 days of posts).
kryptogeist 1 day ago 0 replies      
That's really cool! Thanks for making this website; it's absolutely useful and might save a lot of projects. I can tell because I have myself posted on Show HN and my submission never made it past /shownew. Probably because it wasn't that interesting to HN's audience, but I can imagine how many really cool projects end up buried in there.
oblio 2 days ago 0 replies      



Hee-hee :D

dang 1 day ago 0 replies      
Those stories would get onto /show and also the front page if they got more upvotes, so it is indeed a curation problem. If you're willing to do the work of rescuing good submissions that the rest of us missed, that's great! I wonder if we could integrate that back into HN somehow.
hunt 2 days ago 1 reply      
Nice work! I just noticed a bug though, trying to go to the next page of system software that is sorted by votes doesn't work. Instead of going to the next page, the first page is reloaded with /NaN appended to the URL, as such:


epicide 1 day ago 0 replies      
Minor issue and I understand why it happens, but the name of the submission here ("google.com") isn't terribly helpful :)


hoodoof 1 day ago 0 replies      
Show HN used to be one of the awesome things about HN - a real community showcase. It's not like that any more.

As HackerHunt says, /shownew is just a place that awesome new stuff is hidden.

pipu 2 days ago 0 replies      
This is very nice! Keep it up!
subsidd 2 days ago 1 reply      
The UI on mobile looks great, really clean job!

I have a question, does it also index submissions which never make it to the main show page?

Also, shameless plug : I am hosting an event inspired by ShowHn in Hyderabad, India ( showhyd.com )

edraferi 1 day ago 0 replies      
Very cool. Can I get RSS / Atom for specific topics?
superqwert 2 days ago 1 reply      
How are the topics assigned? Two Show HN posts I have posted recently (that are Open-Source and Javascript) don't have the relevant tags attached and I can't see a way of attaching them.
6DM 2 days ago 0 replies      
Are there/will there be categories for general products? I like seeing the new business ideas that come through every now and then that a developer worked on.
jerianasmith 2 days ago 0 replies      
I too liked the idea. Basically, it all comes down to giving visitors an incentive to keep coming back.
SippinLean 1 day ago 0 replies      
Cool work, missing a "Why Cryptocurrency is Bad" section though
koolhead17 2 days ago 0 replies      
Can we get RSS feeds as well for categories :)
bitwize 1 day ago 0 replies      
Gives me memories of Yahoo!, in the best way possible.
a3n 2 days ago 0 replies      
Very cool.

Two bug reports:

1. Do you have a way to receive bug reports other than HN? :)

2. After doing a search, the left category menu disappears, and stays disappeared even after clicking the HH "home" link at top left. This is true for FF and Chromium latest-ish on LinuxMint.

There are two possible bugs here: a) do you actually want the left menu to disappear, and b) what is your intent for clicking the top-left "HH"?

- Go to HH.

- Search for something. Results appear as you type, nice. No indication from browser that a new page is loading; guessing no load by design. But left menu disappears.

- Manually erase search bar. Menu back.

- Type out a search again, menu disappears.

- Click "HH" at top left. Browser indicates a page is loading, but the search is not erased and (therefore?) the menu is still missing.

- Re-enter HH either by typing the URL into the browser location and clicking "make it so", or by clicking in from another site (like HN). Search field is empty, therefore the left menu is available.

EDIT: This was going to be a separate bug, but I think it's related to the above.

Scrollbar behavior is buggy.

- Clear site cookies. ("It's the only way to be sure.")

- Don't click anything, just move the mouse around and scroll, with mousewheel or dragging scrollbar. Scrollbar intact, entire page scrolls.

- Click in search field, don't type anything. Scrollbar disappears, mousewheel scrolling has no effect, regardless of where the mouse hovers. Entire page jumps slightly to the right, appearing to "chase" the disappeared scrollbar.

- Type something in search field that gets results. Scrollbar returns, top of scrollbar is even with bottom of search field, page does not jump back; I'm guessing this is "your" scrollbar rather than the browser's. Mousewheel only has an effect if the mouse hovers below the search field, in the region where the scrollbar exists.

- Click on any non-active area outside the search field. Search field jumps left very slightly. Scrollbar is back to full length (browser's scrollbar?), but there are now two separate scrolling areas:

- Hover mouse at or above search field level. The entire original front page, including the missing menu and the default "Today" list of sites, scrolls up into the area from viewport top to bottom level of search field (which also scrolls up and away with the rest of the page). Search results do not scroll.

- Hover mouse below the search field, mousewheel scrolls the search results, phantom page at top of viewport does not scroll.

- Drag the scrollbar, the "top" scroll area scrolls.

boltzmannbrain 2 days ago 0 replies      
meta, cool
dsuneps 2 days ago 0 replies      
GJ :o
hexhex 2 days ago 2 replies      
bebna 2 days ago 4 replies      
Sorry, but did you really have to use HH? I'm German, and that only rings my right-extremist alarm bells, because they have used it for "Heil Hitler" (fuck them and him) for generations.
Americans don't need more degrees, they need training venturebeat.com
260 points by sylvainkalache  2 days ago   200 comments top 18
Animats 2 days ago 12 replies      
This is from the operator of a coding school. They have an interesting payment plan. "There is no upfront cost to join Holberton school. We only charge 17% of your internship earnings and 17% of your salary over 3 years once you find a job." That's more interesting than anything in the article. The school takes the financial risk on students not getting a job. This is the reverse of most for-profit education, which relies on loans and doesn't guarantee a job.

I wonder how this is working out for Holberton School.
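Rough arithmetic on that payment model, using purely hypothetical salary figures to gauge the scale:

```python
# Hypothetical figures, just to see what the 17% model adds up to:
internship_earnings = 20000   # assumed total internship pay ($)
annual_salary = 80000         # assumed first-job salary, flat over 3 years ($)

school_take = 0.17 * internship_earnings + 0.17 * annual_salary * 3
print(round(school_take))  # ~44,200 under these assumed numbers
```

Under these assumptions the school's take is well above typical bootcamp tuition, but it only materializes if the graduate actually lands a job, which is the point of the risk-sharing structure.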

msluyter 1 day ago 5 replies      
"There was a time when most people could make a career out of a skill, or stay within the same type of job, but workers today constantly need to adapt. They must become lifelong learners: Teach a student one skill and you got him one job; teach a student to learn and you got him lifetime employment."

It's far from clear to me that any beyond a small percentage of people generally, but perhaps Americans in particular, have the capacity to become lifelong learners. Not saying folks are stupid -- the problem isn't a lack of raw cognitive capacity, IMHO.

What actually makes someone a lifelong learner? Curiosity. If you don't _want_ to learn, it's going to be very difficult to do so, except perhaps while under the gun for short periods of time. And, unfortunately, our education system seems ideally tailored for stamping out children's natural curiosity. If that sounds overly pessimistic, I'd welcome a demonstration to the contrary.

I feel that the answer is basically "let's fix our education system," which I -- again, perhaps overly pessimistically -- believe to be politically impossible.

WheelsAtLarge 2 days ago 4 replies      
Yes, this is very true. Most companies aren't going to keep you long enough to spend lots of money training you. In the old days you expected to be at the same company until you retired; these days you're lucky if you're there for five years. Training should really start at the high school level. It's a disgrace that most students graduate from 12+ years of school without the training to get a good job, yet they can attend a code camp for a year and get a starting job as a coder. It's no wonder we have a large number of workers who feel the American dream is slipping away. They never learned the skills they need to survive and thrive in our current economic system.

What's even sadder is that leaders are not focusing on this part of the problem but instead on silliness like who uses which bathroom. Things need to change.

wcummings 2 days ago 1 reply      
I cringe when I hear about plans for "tuition-free" 4-year college. I went to a fancy private high school, with a bunch of kids who graduated and went to fancy private universities. From what I can tell, it's done very little for them. The people who aren't working in tech or finance are literally on food stamps, extending their time in college, doing something like Teach for America, or working retail jobs. With 4-year degrees.

As if that weren't agitating enough, universities seem to relish their role in "not providing job training" and being "not a vocational school" and "producing well-rounded students". The government shouldn't be footing the bill to send people to these schools (through subsidized loans or other mechanisms), which for most students are glorified daycare centers for young adults who don't want to join the workforce yet. It's a huge waste of money, as the current levels of student loan debt attests to.

I don't know all the details here, but an apprenticeship program seems like a step in the right direction. The federal government should forgive student loans (and lay the cost on the universities).

panic 2 days ago 2 replies      
You could make the same argument from the worker's perspective. Companies don't need to hire graduates, they need to offer training for their employees! Why are we focused on what people should do when companies are the ones setting the bar?
PeterStuer 2 days ago 1 reply      
Jobs are for machines, life is for people. How many of the 'jobs' really contribute to a better world? You can't blame people for wanting them regardless, as we've tied their whole right to live to 'employment'. I'd say Americans especially could do with a broad scientific and cultural renaissance-style education.
eksemplar 2 days ago 5 replies      
I agree to some extent, but for a lot of jobs the degree is your skill. I employ a lot of low-level coders who are trained to do what we want them to do. Haven't had trouble finding those.

What I do have trouble finding is someone with the theoretical knowledge of how to utilize machine learning, which design patterns, paradigms, techs, and libraries we should use and why, and so on.

jliptzin 2 days ago 2 replies      
Education in this country needs to be totally reformed. High school is basically day care in most places. No reason the average 4 year degree cannot be completed at ages 14-18. Then if people want to get a masters they can go ahead, but it shouldn't be expected like going off to college is.
acscott 2 days ago 0 replies      
No evidence to support the thesis. No special personal perspective to share the secret sauce.

The unfilled job number of 6 million is an interesting data point. I'll give them that. But a job does not equal having a good life.

In short, the only thing to take away from this is a waste of time. Am I being harsh? Not as harsh as un-empathic, presumptive declarations about what people in America need (by the way, you need to provide evidence and support to make claims like this, at the very least).

jseliger 2 days ago 0 replies      
A useful point and one I make in this essay on boosting apprenticeships: http://seliger.com/2017/06/16/rare-good-political-news-boost.... The "College for all, all the time and everywhere" mantra needs to end. College is great but is not a panacea.
PaulHoule 2 days ago 1 reply      
Job training has been universally popular among US politicians for a long time, but the evidence that it helps people find work is weak. See


Aoyagi 2 days ago 3 replies      
I heard that most American degrees are, in terms of knowledge and whatnot, worth a high school diploma from some European countries. Any truth to that at all?
Kenji 2 days ago 3 replies      
>Companies, especially in the science, technology, engineering, and mathematics (STEM) industries, are shifting their recruiting process from where did you study? to what can you do?.

But are they really? Go look at job openings. You will find more often than not 'X degree required' (or at least recommended). If you don't have a degree, your application will often go straight into the trash, before the question "what can you do?" is asked.

dpflan 2 days ago 0 replies      
There is a New York Times article today (6/28/17) about the idea of skills vs. degrees.

> https://www.nytimes.com/2017/06/28/technology/tech-jobs-skil...

remotehack 2 days ago 2 replies      
When I was in High School, I went to 2 years of Vocational School at the same time and took advanced studies for college so I'd have something to fall back on one way or the other.
DiNovi 1 day ago 0 replies      
These articles for the most part live off of an imagined past. It wasn't hard to train someone to work an assembly line, which used to be all you needed to do.

Skilled employment such as plumbing and electrical had, and still have, apprenticeships...

rbanffy 2 days ago 0 replies      
Can critical thinking and debate skills be taught? Throw in some geopolitics and I can support the idea...
The Common Lisp Cookbook lispcookbook.github.io
236 points by macco  1 day ago   161 comments top 5
jlarocco 1 day ago 5 replies      
For a more modern and up to date guide, Edi Weitz's "Common Lisp Recipes" is a great reference: https://www.amazon.com/Common-Lisp-Recipes-Problem-Solution-...

The more I use Common Lisp, the more disappointed I am that it hasn't become more popular. It really is higher level than Python, Ruby, or Perl but with nearly the performance of C and C++.

pnathan 22 hours ago 0 replies      
Oh hai Hacker News!

vindarel on GitHub has been making an enormous number of contributions over the past two weeks, and the modern look and feel (and some of the more modern content bits) are entirely due to their hard work.

eudoxia0 spent some time prior to that converting it and getting it modernized.

I'd like to congratulate them on their work, since we're on the front page of HN.

(I sort of steward the GH fork of the cookbook these days).

laxentasken 1 day ago 4 replies      
Started looking at getting into Racket and CL after reading up on Lisp, and there is something about it that really bugs me, in a good way. Sadly, the train seems to have left the station regarding positions where you use Lisp where I live. It's all Java and C-flavors.
znpy 21 hours ago 1 reply      
Hi, I don't mean to interrupt anyone, just wanted to ask: why hasn't anyone made the Common Lisp HyperSpec prettier, more navigable, and more tooling-friendly yet?
fokinsean 1 day ago 8 replies      
I've never ventured into a lisp like language. Would you recommend starting with common lisp or is jumping into clojure ok?

I have been in the JS, Java, Python world for a while and this looks like a language that could stretch my brain a little bit.

EPA Official Accused of Helping Monsanto Kill Cancer Study bloomberg.com
207 points by objections  1 day ago   95 comments top 10
DiabloD3 1 day ago 3 replies      
Relevantish studies: https://www.ncbi.nlm.nih.gov/pubmed/22331240 https://www.ncbi.nlm.nih.gov/pubmed/28185844 http://www.sciencedirect.com/science/article/pii/S1532045614...

And the worst one of all: https://people.csail.mit.edu/seneff/2016/Glyphosate_V_glycin...

So yeah, not only does it cause cancer, it causes a wide range of diseases: literally any one that can be caused by protein misassembly in ways that the body can't easily clear the protein (or if it does, the glyphosate can end up being reused instead of flushed out of your body) can be linked to this.

So, in around 5 years, if they come out with a study that Autism is caused by, or greatly enhanced by, glyphosate exposure in utero or during nursing (glyphosate concentrations are extremely high in breast milk in mothers exposed to the chemical), I will not be surprised whatsoever.

refurb 1 day ago 2 replies      
Ignoring the question as to whether or not "killing the study" was nefarious or not, this is a great lesson on what you should and shouldn't say in emails.

I remember getting training a long time ago from my employer. If you're discussing anything remotely controversial, pick up the phone or walk to the person's office. Don't put it in an email.

Why? Because lawyers will find it and twist it however they want to make you look bad. Had a colleague annoy the hell out of you asking for something that didn't need to be done? Don't put "I deserve a medal for killing this study" in an email.

pavement 1 day ago 3 replies      
It never ceases to amaze me, how much controversy the name Monsanto stirs up.

The waters around this company are so murky, that it's less a question of whether the company operates an unregulated extrajudicial campaign against public awareness of their wares and activity, but rather, what's their spend on it, and how deep do they go?

Food security is frighteningly politicized, to the point of bald militarization (see: Canada's maple syrup). Almost nothing would surprise me.

costcopizza 1 day ago 1 reply      
Yet another nail in my personal "Why are agencies tasked with protecting human health even allowed to be lobbied in the first place?" coffin...
Natsu 1 day ago 1 reply      
Does anyone have a link to the actual court filing so we can see the document they reference? Is this an email found during discovery or something else?

> "If I can kill this I should get a medal," Rowland told a Monsanto regulatory affairs manager who recounted the conversation in an email to his colleagues, according to a court filing made public Tuesday

> The case is In re: Roundup Products Liability Litigation, MDL 2741, U.S. District Court, Northern District of California (San Francisco).

gerdesj 1 day ago 3 replies      
"Roundup" isn't just for farmers. I deployed around 35 x 5 ml of the stuff in 5l of water here, at home (UK) two days ago. I took care to only spray in the early evening after bees and most other insects had buggered off but probably some woodlice ("jiggers" I think, in en_US) might have suffered.

I treat the stuff as a nasty poison but had no idea that it might be carcinogenic.

everdayimhustln 1 day ago 3 replies      
It's no surprise when corrupt regulators cozy with industry are ineffective. It's not that regulation is bad, it's the nuance of having sensible and effective authority, legislation and enforcement that is desperately needed to properly regulate chemicals as and how they are used in many industries.

This is almost as bad as the Monsanto guy who offered to drink glyphosate and then refused to drink it. https://youtu.be/ovKw6YjqSfM

TazeTSchnitzel 1 day ago 1 reply      
The evidence offered that glyphosate causes cancer is not very credible, though.
hoodoof 1 day ago 2 replies      
Presumably Trump will appoint for former Monsanto CEO to run the EPA?
calafrax 1 day ago 0 replies      
Keep in mind that regulatory agencies exist to protect corporations by giving them immunity from product liability lawsuits.

Just like California's labeling everything as a known carcinogen has nothing to do with protecting consumers but only serves the purposes of making corporations more immune from liability because they have duly warned you in compliance with the law that their products are harmful.

Vim Tutorial as an Adventure Game vim-adventures.com
276 points by KirinDave  1 day ago   59 comments top 20
wjakob 1 day ago 1 reply      
I finished VIM adventures a few years ago and found this approach quite helpful to improve my VIM muscle memory.

What's very unfortunate though is that the licenses are limited to 6 months. Occasionally I wish I could go back to one of the levels focusing on a specific feature, but paying another $25 every time just seems excessive.

filchermcurr 1 day ago 1 reply      
It's great until you realize it costs $50. A year.
alde 1 day ago 3 replies      
Haha, this is hard to play with cVim bindings on Chrome. An insert mode inside an insert mode.
papaver 1 day ago 3 replies      
can't say i'm a fan... i found the best way to learn to use vim is to force yourself to use it in an everyday environment. that means at work, where you need to get stuff done. and it will suck and hurt but it works. jump in the water and you will learn to swim. i wanted to quit around a dozen times but i stuck through it. sometimes i would copy and paste using the cursor and clicking so i could move forward, that's fine. you have to learn little things at a time. in 6 months i found i was fluent enough to get my normal work done with ease. 10 years later i love every movement of it and am still learning new things everyday... what an amazing editor...
mettamage 1 day ago 4 replies      
Hey! This thread keeps popping up! Yay :D

In another thread someone recommended vimtutor instead of this game. I played both, and for learning I like vimtutor much more.

To people who don't know vimtutor allow me to explain :)

Vim Adventures still gave me the feeling that vim takes 5+ years to learn and you need to be crazy dedicated and good to understand vim. That's not the fault of Vim Adventures, that's the fault of the folklore that surrounds vim (e.g. http://www.terminally-incoherent.com/blog/wp-content/uploads... )

30 minutes of vimtutor and I felt that I had "basic vim skills" and that I "could manage myself in a vim editor." I thought it would take 2 years to get these skills. In other words, if you feel you have realistic expectations about how hard it is to learn vim, try vimtutor!

That said this is a fun game!

stared 1 day ago 1 reply      
"If it is possible to gamify so seemingly boring things as learning keyboard shortcuts, then sky's the limit!" from https://github.com/stared/science-based-games-list#bonus
SPBS 1 day ago 0 replies      
I remember playing the first few levels of this game when I was still new and starting to learn vim. It marked my first foray into programming as it was a requirement for my basic programming module, so vim tutor was quite intimidating back then (huge wall of text). I believe I stopped using vim adventures when I found out about the paywall.

However, googling about vim had convinced me that it was this awesome piece of ancient software that was somehow better than modern editors, so I stuck with it. I ended up doing my assignments in vim starting with only the hjkl/<Esc>/i keys. Then whenever I wanted to do some shortcut I thought vim might be able to do I just googled it, got my mind blown, then internalised it. Or reading up on articles talking about the must-know vim shortcuts.

springogeek 23 hours ago 0 replies      
This inspired me to try harder with my learning of VIM: https://vimebook.com
MikeKusold 1 day ago 0 replies      
This is a great tutorial. Most people point out that most of this is in `vimtutor`, but I found this easier to consume.

That said, I got stuck when I had to learn how to switch buffers. I don't know if I missed a description of how to do it, or if it was missing.

tiirbo 1 day ago 2 replies      
I just started and I can't seem to get to the treasure chest. Sigh, am I that dumb?
budadre75 1 day ago 2 replies      
I remember playing this in 2013 and it really helped me learn vim. Then I discovered roguelike games, which depend heavily on vim key bindings, especially hjkl for movement, and have played a bunch of them ever since, getting really fluent in vim keybindings. Now, besides using vim as a text editor and playing roguelikes, I have the VimFx extension for browsing the web in Firefox and Vimium for Chrome. I just hope I can someday have vim editing functionality in any text box; I have been searching for a solution for years.
dak1 1 day ago 0 replies      
I found I learned more faster just using vimtutor, although the idea of learning Vim through a graphical game is still pretty neat.
pawadu 1 day ago 1 reply      
I tried this when it was first created. After 30 minutes of fighting with foreign key bindings I realized I can't stand vim even in the context of a game.

So yes, this game is the reason I will always be an emacs person...

aryamaan 20 hours ago 0 replies      
Let's say I want to learn an editor inside and out: is it worth learning Vim now, or should I invest my time in Sublime?
Sodman 1 day ago 3 replies      
For anyone that continued past the paywall, is it worth it?

I started this game back when I was in college but stopped at the paywall because I didn't have the money at the time. Forgot about it till just now! If this can teach more advanced vim concepts via gameplay then I'd probably be open to buying it now than I used to be.

tehwalrus 1 day ago 0 replies      
I'll save my $25 for some time when I'll be not too busy to play all the way through. So, maybe at Christmas (if I even remember).
partycoder 1 day ago 1 reply      
I have been an IDE guy for years. But here at work everyone uses vim, so I gave it a try. I tried to set the thing up, and gave up shortly after that. Then I tried SpaceVim, which was too clunky for my taste (I am sure it will improve; it's a rather recent project). So I migrated to Spacemacs, which surprisingly supports vim-style commands well.
sunilkumarc 1 day ago 0 replies      
This is really cool!
minademian 1 day ago 0 replies      
this is excellent.
Denied Entry haxx.se
279 points by antongribok  1 day ago   69 comments top 13
jacquesm 1 day ago 1 reply      
Someone asked me today when I was going to visit the US again. I told them never. That is, not until this madness stops. The US now treats citizens from allied countries as enemies and that's just plain rude and not called for. There isn't a shred of evidence that any of this has made America safer in any way and it would be pretty easy to argue that it has made America less safe on account of pissing off large numbers of people both inside and outside the USA.
CaliforniaKarl 1 day ago 3 replies      
Something that I was curious about, and which others might also be:

>Sorry, you're not allowed entry to the US on your ESTA.

"ESTA" stands for the Electronic System for Travel Authorization, and is a program administered by US Customs & Border Protection (part of the Department of Homeland Security). It's used by people in Visa Waiver countries to submit information about themselves & their travel plans, before they depart their country of origin.

More info here: https://www.cbp.gov/travel/international-visitors/esta

Info on the Visa Waiver program: https://www.cbp.gov/travel/international-visitors/visa-waive...

I guess Mr. Stenberg could have applied for a B-1 Visa to cover the visit, but I thought the whole reason for the Visa Waiver program was to remove the need for people in member countries to get a B-1.

rbcgerard 1 day ago 1 reply      
"DHS has carefully developed the ESTA program to ensure that only those individuals who are ineligible to travel to the United States under the VWP or those whose travel would pose a law enforcement or security risk are refused an ESTA. While the ESTA website provides a link to the DHS Travel Redress Inquiry Program (TRIP) website, there are no guarantees that a request for redress through DHS TRIP will resolve the VWP ineligibility that caused an applicants ESTA application to be denied.

U.S. Embassies and Consulates are not able to provide details about ESTA denials or resolve the issue that caused the ESTA denial. Embassies and Consulates will process an application for a non-immigrant visa, which, if approved, will be the only way that a traveler whose ESTA application has been denied would be authorized to travel to the U.S."


projectramo 1 day ago 3 replies      
I think the reason for the denials is a change in the interpretation of the rules.

ESTA used to be for short term visits for any reason.

I think some people in CBP now interpret it to mean a short stay that is not "work" related.

You can look up all the academics -- which were definitely covered by ESTA -- who have had issues because some people at the border were not sure on how to interpret them.

They are not alone. Way back in 2007 or so (so before Trump, but after 9/11), I overheard a German visa officer tell an American that it was totally illegal for them to work if they just came as a visitor. He had to get a work visa and it was long and complicated.

He had made the mistake of telling her that he planned to hop over to visit some clients.

He then said something like "oh, ok, its not a work trip its just a vacation. Nevermind about the visa, good bye." and walked out.

I think someone interpreted the conference as "work", and that is what led to all this.

I think they (whoever it is) should make it clear to the officers what the policy is, and I also think the policy should be that conferences and the like are covered by ESTA.

pacaro 1 day ago 2 replies      
Horrible though this is, it's (bizarrely) better than the pre-ESTA alternative. My brother was denied entry to the US in 2003 while traveling on a visa waiver. At this point he was detained (without access to counsel) until a suitable outbound flight was available, and deported. The red stamp in his passport renders him ineligible to apply for an entry visa until 2018. This is all on the discretion of one immigration official, with no recourse for appeal. The justification is that when you sign the visa waiver, you agree to all this.

Edit: my brother is not entirely innocent in all this, but knowing before you fly is an improvement

russellbeattie 1 day ago 2 replies      
Being denied entry is the same as being accused of a crime, but without being allowed to face your accuser or defend yourself in any way. It's infuriating in every way, even if it doesn't affect you directly. Why? Because someone, somewhere - most likely without a shred of empirical proof - decided that it would be "safer" if people aren't told why they were denied, because otherwise the "bad guys" would be able to circumvent the system, or some other equally moronic rationale. We all know the type of close-minded, ignorant pinhead who made that decision, and it's infuriating that they actually have any sort of control over our daily lives.
Radim 1 day ago 2 replies      
It's popular to poo on USA these days, but I had a similar experience with Canada.

I was about to present at NIPS (a ML conference in Vancouver) in 2010, but was denied visa (Czech citizen - EU). No reason given. Those pesky PhD computer scientists, we don't want them in our country, ruining our economies! :)

I also cursed a bit, but unlike the OP, didn't cry (WTF?). It's their country, their rules, their loss (I think).

idkfa 1 day ago 1 reply      
People coming to unstable states like US or Ukraine, balancing on the edge of government collapse, should expect such things to happen with pretty high probability. I really can't see why this could be surprising at all.
rdiddly 1 day ago 2 replies      
Kudos to the QUIC working group for this absolutely correct response: "We won't hold any further interim meetings in the US, until there's a change in this situation."
irq 1 day ago 6 replies      
Was it really necessary for him to refer to his Twitter follower count so many times?
partycoder 1 day ago 2 replies      
By spreading FUD in this way, I wouldn't be surprised if major conferences happening in SF start moving to other countries: Apple WWDC, GDC, Google I/O, Salesforce's Dreamforce, Oracle's JavaOne... many people go through significant effort and cost for their travel, and it's no fun that with no notice you can get sent back home.

Really dumb move.

DiNovi 1 day ago 0 replies      
So sad. I wish half this country hadn't gone mad
Canada's top court backs order for Google to remove firm's website from searches cbc.ca
252 points by executive  1 day ago   226 comments top 23
ABCLAW 1 day ago 5 replies      
Tremendously impactful decision, regardless of which side of the case you support.

Interestingly, the majority addressed a Google argument centered upon concerns regarding the possibility of international censorship:

"Googles argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction, is theoretical. If Google has evidence that complying with such an injunction would require it to violate the laws of another jurisdiction, including interfering with freedom of expression, it is always free to apply to the British Columbia courts to vary the interlocutory order accordingly. To date, Google has made no such application. In the absence of an evidentiary foundation, and given Googles right to seek a rectifying order, it is not equitable to deny E the extraterritorial scope it needs to make the remedy effective, or even to put the onus on it to demonstrate, country by country, where such an order is legally permissible."

In other words, if an interlocutory order with international scope would violate foreign freedom of expression legislation (or other legislation in general), it would be possible to seek to vary the interlocutory order by raising those issues specifically.

This framework should be familiar to jurists in Canada, as it resembles the Paramountcy doctrine.

Whether or not this case will be widely used is unclear. This case is nearly 100% fact-perfect for the Respondent - it honestly looks like a civil-rights test case. It is very possible that imperfect everyday facts provide sufficient fuel for judges to distinguish this case from the case in front of them.

The dissent is interesting because it lists a number of pieces of evidence the dissenters would have needed to see before moving forward, including the impact of the first order on sales figures. These dissents often provide counsel with information regarding how to structure future cases in order to avoid outstanding concerns.

These are my first pass thoughts. I'll probably read it through another two times before the day is done.

kodablah 1 day ago 3 replies      
A hypothetical: Company A is in country X and company B is in country Y. Country X declares on behalf of company A that company B be blocked from searches worldwide (including country Y). Country Y declares on behalf of company B that company B cannot be blocked from searches (at least in country Y). Who wins in the global context? Whichever can/will fine the most? Does this give benefits to the overly-regulating country/region and is that what we want?

Another hypothetical question: Can Canada require some retailer in the US to remove another non-Canadian company's products in the US just because the retailer and a competing company both have presence in Canada?

escapetech 1 day ago 6 replies      
This wouldn't be the first time a government is requiring Google to modify their search results. This case parallels the "right to be forgotten" cases brought against them by the EU several years ago. There is a reason that civil liberties and human rights organizations like the ACLU are concerned about this precedent.

In the US, with the murders of unarmed civilians by law enforcement and subsequent acquittals occurring at an alarming rate with increasing public outrage, it might be only a matter of time before a court somewhere rules in favor of a person found innocent who is suing to keep as many details of a particular case off the Internet on the grounds that his or her constitutional rights are being violated (i.e., inability to find employment, friendship, etc.), with companies such as Google being forced to comply with the court's rulings.

slantyyz 1 day ago 0 replies      
Canadian law professor Michael Geist has a pretty good summary/analysis of this case:


massar 1 day ago 0 replies      
The EFF is apparently on the side of ElGoog:


"Such a broad injunction sets a dangerous precedent, especially given that it is likely to conflict with the laws of other nations."

qb45 1 day ago 2 replies      
I've never expected to use a Russian search engine to evade censorship but there we go:



CobrastanJorji 1 day ago 0 replies      
> "Today's decision confirms that online service providers...have an affirmative duty to take steps to prevent the internet from becoming a black market."

Well that's terrifying.

gdulli 1 day ago 1 reply      
> Google voluntarily removed hundreds of webpages from its Canadian search results on Google.ca. But the material continued to show up on Google's global search results.

> So Equustek obtained a further injunction from the court ordering Google to remove the websites from its global search results.

> Google appealed and argued it was not a real party to the dispute, and that a global injunction would violate freedom of expression.

Google didn't object in principle to removing the listings from google.ca but did object to removing them from the main site results. What does that mean? There's actually no objection in principle, but there's enough technical challenge or cost to modifying the global results that they're willing to fight it in court?

timthelion 1 day ago 1 reply      
I was expecting the ruling to involve some really bad behavior such as selling mislabeled drugs or invading the privacy of private individuals. Instead it is a benign product-relabeling suit. IMO, product relabeling doesn't even harm the company whose products are relabeled.
seomint 1 day ago 5 replies      
Why don't they just use robots.txt to keep themselves out of Google's index?

  User-agent: Googlebot
  Disallow: /

Has Google stopped using that directive?
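As a sketch of how a crawler would interpret the directive quoted in that comment, here is Python's standard-library `urllib.robotparser` applied to it (example.com is a placeholder domain, not from the article):

```python
# Illustration: interpreting the robots.txt directive quoted above
# with Python's standard urllib.robotparser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# parse() accepts the file's lines directly, so no network fetch is needed
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /",
])

# Googlebot is barred from every path; other crawlers are unaffected
print(rp.can_fetch("Googlebot", "http://example.com/any/page"))  # False
print(rp.can_fetch("Bingbot", "http://example.com/any/page"))    # True
```

Of course, robots.txt only works through the crawler's voluntary cooperation, and it is the site owner who must publish it; it offers no way for a third party like Equustek to force content out of an index.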
chrisparton1991 1 day ago 0 replies      
I wonder if the website in question is listed on other search engines (I can't see why not).

Assuming this is the case, is it fair to force one company to expend the effort altering their search results when others don't have to? Did Google do anything wrong that Microsoft (Bing) or Yahoo didn't?

awinter-py 1 day ago 2 replies      
Ironically, now that G has been ordered by a court to take the content down, keeping it up is a form of protest, i.e. a comment on the law, i.e. political speech.
pavanky 1 day ago 2 replies      
This kind of stuff should be arbitrated in an international court. Giving an authority in a single country the say on what can and can not be seen on the internet world wide is a terrible idea.
lwlml 1 day ago 1 reply      
Every time something like this happens I wish YaCy (http://yacy.net/en/index.html) was in better shape.
Coffee_lover 18 hours ago 0 replies      
I am confused as to how a Canadian court could have any sway over search results in other countries. Is there even any legal precedent on the matter?
OscarTheGrinch 1 day ago 0 replies      
If a product or service is a proven scam, it is highly likely to be a scam in all jurisdictions. Googles own takedown procedure should have fixed this before it got to the courts.
sharemywin 1 day ago 1 reply      
Completely read that wrong. Thought it order google to remove it's own website addresses from search results. that would have been interesting.
ominous 1 day ago 0 replies      
Removing results from google makes so little sense.

Can I buy google ads pointing to those blocked websites?

mowenz 1 day ago 0 replies      
>"We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods."

That precise argument may be well-intentioned, but it threatens free speech because it sets a precedent placing a burden of acceptable effects and results on free speech.

In other words, should Tiananmen Square be de-indexed globally because free speech in China does not require it? Should torrent trackers be de-indexed--free speech does not require illegal file sharing, after all? What about BitTorrent clients? Tor Browser?

This site may or may not be rightfully de-indexed, but it is not because of some limitation on free speech.

downandout 1 day ago 1 reply      
I wonder if this decision can be used as a framework to kill extortion-based sites such as RipoffReport.com that Google has aided and abetted for more than a decade. That would be wonderful.
crb002 1 day ago 1 reply      
Google should have country-specific feeds (canada.google.com) instead of domains. Then location-based filtering is forced onto the browser, where it should be.
known 1 day ago 0 replies      
Not honoring http://www.robotstxt.org/faq/prevent.html can be a felony;
Matthewiiv 1 day ago 1 reply      
This is total bullshit. Relatively benign ruling but sets a dangerous precedent.
       cached 30 June 2017 15:11:01 GMT