Hacker News with inline top comments - 16 Sep 2017
1
AllenNLP An open-source NLP research library, built on PyTorch allennlp.org
130 points by varunagrawal  8 hours ago   14 comments top 3
1
TekMol 5 hours ago 4 replies      
Wow, is this really state of the art?

 Joe did not buy a car today. He was in buying mood. But all cars were too expensive. Why didn't Joe buy a car? Answer: buying mood
I think I have seen similar systems for decades now. I thought we would be further along by now.

I have tried for 10 or 20 minutes now. But I can't find any evidence that it has much sense of syntax:

 Paul gives a coin to Joe. Who received a coin? Answer: Paul
All it seems to do is to extract candidates for "who", "what", "where" etc. So it seems to figure out correctly that "Paul" is a potential answer for "Who".

No matter how I rephrase the "Who" question, I always get "Paul" as the answer. "Who? Paul!", "Who is a martian? Paul!", "Who won the summer olympics? Paul", "Who got a coin from the other guy? Paul!"

Same for "what" questions:

 Gold can not be carried in a bag. Silver can. What can be carried in a bag? Answer: Gold

2
mamp 5 hours ago 0 replies      
This is very brittle: it works really well on the pre-canned examples but the vocabulary seems very tightly linked. It doesn't handle something as simple as:

'the patient had no pain but did have nausea'

Doesn't yield anything helpful on semantic role labeling, and didn't even parse on machine comprehension. If I vary it to, say, ask 'did the patient have pain?' the answer is 'nausea'.

CoreNLP provides much more useful analysis of the phrase structure and dependencies.

3
sanxiyn 4 hours ago 1 reply      
In "Adversarial Examples for Evaluating Reading Comprehension Systems" https://arxiv.org/abs/1707.07328, it was found that adding a single distracting sentence can lower F1 score of BiDAF (which is used in demo here) from 75.5% to 34.3% on SQuAD. In comparison, human performance goes from 92.6% to 89.2%.
2
Cost of iPhone X in 1957 bradford-delong.com
173 points by lsh123  9 hours ago   110 comments top 32
1
code_duck 4 hours ago 3 replies      
This comparison seems fairly fantastical and irrelevant. Something from within the history of consumer electronics would be more useful to consider, in my opinion.

For example, in 1982, the wildly popular low-end home entertainment computer, the Commodore 64, was released at $595, or about $1500 after inflation adjustment. On release in 1977, the Apple II was $1298, or over $5000 in today's terms. You could pay $400 per 4k of ram, so $6000 if you wanted one with 12k ram. If the iPhone X came out in 1985 for this price, it would have been $400 in 1982 dollars, or 40% of the C64 - with over 4 million times the storage capacity, to say nothing of the other capabilities.

That our modern mobile devices are popularly called 'phones' misses the point that they are used as general purpose computing devices, not primarily phones or even necessarily primarily for communication. They are also GPS navigators, cameras, calculators, recipe files, photo albums, alarm clocks, book readers, walkmans, home stereos, encyclopedias, wallets, and a lot more. People used to frequently pay decent sums to buy dedicated devices to perform many of those services.

2
nopinsight 7 hours ago 3 replies      
Having a billion transistors is not 10 times as useful as 100 million transistors. The diminishing return kicked in even before that. Unless a killer application needs a billion transistors, the value created over a previous iPhone is incremental.

I think a better way to measure value is

 invention's cost effectiveness = price / (time * improvement to well-being)
The denominator is how much the invention makes our life better as measured by (the amount of time * the extent it helps).

By this measure, smartphones are not bad but probably lose out to the Internet, washing machines, and air conditioners/heaters in hot/cold climates. Google is a pretty big improvement over older search engines and is worth quite a lot, although the price we pay in privacy is not obvious.
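
A toy version of that metric in Python, purely to make the formula concrete (every number below is invented for the example, not data from anywhere):

  # cost per unit of benefit = price / (time * improvement); lower is better
  def cost_per_benefit(price_usd, hours_per_day, improvement):
      return price_usd / (hours_per_day * improvement)

  # hypothetical inputs: price, daily hours of use, well-being gain (1-10)
  devices = {
      "smartphone":      (1000, 3.0, 5),
      "washing machine": (600,  2.0, 9),
      "air conditioner": (500,  8.0, 8),
  }
  for name, (price, hours, gain) in devices.items():
      print(f"{name:16s} {cost_per_benefit(price, hours, gain):7.2f}")

By this toy accounting the air conditioner wins and the smartphone loses, echoing the parent's ranking; the point is only that the metric becomes computable once you are willing to put numbers on "improvement".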

3
abtinf 7 hours ago 1 reply      
Offtopic: The second of the three pictures at the bottom of the post, the one with the big yellow circle monitor looking thing, is a radar monitoring station. It was meant for long shifts, requiring dedicated attention from a soldier. In support of that purpose, it has a built-in cigarette lighter and ashtray.

The picture is from the Computer History Museum. Well worth a visit.

https://commons.wikimedia.org/wiki/File:SAGE_Weapons_Directo...

4
dmichulke 26 minutes ago 0 replies      
This flies directly in the face of mainstream economists who claim that deflation kills an economy and therefore requires positive inflation, because otherwise "people would stop spending".

Here's the thing: Deflation got people spending money on an iPhone.

5
duhast 8 hours ago 1 reply      
Comparisons like that always assume that the price is constant with increased demand, even when the entire world GDP is spent on something. What would likely happen is that the price would first go up and later come down, as world manufacturing shifted to vacuum tube production.
6
captn3m0 9 hours ago 4 replies      
Trying to answer the following might be fun as well: How much is a 1000 USD from today in that era in terms of purchasing power? What could you have bought with the equivalent amount of money, and how many people could have afforded the iPhone equivalent then?
7
thinbeige 33 minutes ago 0 replies      
Expected a figure after inflation adjustment and not a calculation of the parts' cost at that time. Still interesting.
8
deepnotderp 8 hours ago 2 replies      
It's unfair to point to one exponentially increasing technology and use that as an index for general technology.

On the other hand, it's interesting that they chose the semiconductor industry as their example, because progress in the semi industry is slowing due to the laws of physics with respect to standard CMOS, and innovative rescues in either architectural form or through novel device physics are very unlikely given the industry's glacial pace and apparent allergy to innovation. A field that scorns young entrants is bound to die someday.

9
bhouston 2 hours ago 0 replies      
The last image is from the TV series Time Tunnel. The TV show featured a decommissioned SAGE computer as a major prop: http://q7.neurotica.com/Q7/scifi/Tunnel/
https://en.wikipedia.org/wiki/Semi-Automatic_Ground_Environm...
10
c3534l 8 hours ago 2 replies      
For $150 trillion you could research how to make a smaller and cheaper transistor.
11
userbinator 9 hours ago 3 replies      
...and yet, even with all this computing power, simple tasks like loading a webpage full of text still take ridiculously long times.

Maybe not with a '57 vacuum tube (transistors actually existed at the time already) computer, but then there's this:

https://hubpages.com/technology/_86_Mac_Plus_Vs_07_AMD_DualC...

The massive increase in computing power has been accompanied by a corresponding increase in complexity. For better or worse.

12
drjackyll 1 hour ago 0 replies      
The comparison is inaccurate. The iPhone X has 256 GB of flash, not RAM. Vacuum tubes were used for RAM, while smaller and cheaper ferrite cores were used for permanent storage.
13
DigitalJack 8 hours ago 1 reply      
24 MHz? I can't make that work with a simple typo...
14
abecedarius 7 hours ago 0 replies      
If I'm reading http://jcmit.net/memoryprice.htm right, core memory in 1960 cost about $0.60/bit (with ~10 microsecond cycle time). It does list an only slightly faster transistor memory for 1957, maybe for use in the CPU?

I'm guessing these are historical prices, not inflation-adjusted.
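
For a rough cross-check (my arithmetic, following the prices quoted above): pricing the iPhone X's 256 GB at that core-memory rate gives

  bits = 256e9 * 8                 # 256 GB of storage, decimal units
  cost_1960 = bits * 0.60          # at ~$0.60/bit for core memory
  print(f"${cost_1960:.2e} in 1960 dollars")   # ~$1.2 trillion
  # a very rough ~8x inflation factor since 1960 puts that on the
  # order of $10 trillion today - memory alone, ignoring logic

which lands well below the article's $150 trillion figure, consistent with the article pricing all the transistors at tube-era rates rather than just the storage.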

15
ajross 8 hours ago 0 replies      
MLC flash stores more than one bit per transistor. Otherwise the numbers seem sound.
16
logicallee 25 minutes ago 0 replies      
For the punchline, the author should have followed up:

>taken up 100 billion square meters of floor space - that is (with a three-meter ceiling height per floor): a hundred-story square building 300 meters high, and 3 kilometers long and wide

by saying,

"Oh and by the way this device has an edge to edge display that is so real you can hold it up seamlessly against a background while it invents a made-up image and draws it on, pretending it's part of reality.[1] It knows its position and can adjust to your movements. It also includes a photography studio that makes billboard-size full-color photos. It fits in your pocket and needs to have its battery recharged once per day. Also it's an entire telephone including video which it has a camera facing the front for. And you can talk to it, it has a built-in assistant. Basically, magic."

[1] https://images.adsttc.com/media/images/59b8/247c/b22e/38e2/0...

17
csours 9 hours ago 1 reply      
Things that haven't improved exponentially since 1957:

Batteries

Cars

Houses

People

Dishwashers

TVs

Things that have improved exponentially since 1957:

Disk drives

Integrated Circuits

RAM

18
amelius 1 hour ago 0 replies      
I think this post unnecessarily contributes to the overhyping of the iPhone. It could just as well be written for any other brand of phone.
19
k__ 2 hours ago 0 replies      
I guess I'll get a note 8 then.
20
Houshalter 8 hours ago 1 reply      
The comparison is a bit unfair because the technology of 1957 was primarily analog. A few rolls of film can store the same HD videos as an iPhone for instance. Video phones existed in the 1960s and just didn't catch on.

It's sort of an apples and oranges comparison. Obviously there are many things that couldn't be done with analog tech. But it's not as bad as you would expect with just naive comparisons based on the cost of vacuum tubes.

21
pavement 9 hours ago 0 replies      
tl;dr USD $150 trillion.
22
la_fayette 4 hours ago 0 replies      
this is a nice approach. would it be possible to factor environmental costs into this calculation?
23
nielsbot 9 hours ago 4 replies      
Why 1957?
24
dredmorbius 5 hours ago 0 replies      
I keep seeing these economic treatments of "how much progress has accelerated" (and have been seeing them for some 30-40 years now myself), and ... I'm starting to feel a case of three-card monte or the travelling dime problem -- keep moving the pieces quickly enough so that the audience^Wmarks don't spot the trick.

First off: yes, absolutely, the cost of provisioning and operating electronic memory data storage and processing has fallen phenomenally. DeLong makes that point abundantly clear:

in 1957, the transistors in an iPhone X alone would have ... cost $150 trillion of today's dollars: one and a half times today's global annual product ... taken up a hundred-story square building 300 meters high, and 3 kilometers long and wide ... drawn 150 terawatts of power - 30 times the world's current generating capacity

But let's look at those comparisons right there.

The iPhone X costs $1,000, and for easy math I'll assume all of that is the memory storage (this is wrong, but it's not horribly wrong, on an orders-of-magnitude basis). If the 1957 cost was $150 trillion, then the price has fallen by at least 150 billion fold. (And in fact it's fallen more, because there's more than just memory in the device, so my easy math understates the case.)

Global GDP in 1955, or more accurately GWP, was $5.4 trillion. As of 2016 it was about $80 trillion, or, just for round numbers, let's call that $5 trillion and $100 trillion.[1]

The multiplier is a factor of 20. Which, if I check maths, is somewhat less than 150 billion. Which is to say that whatever's been strapping white lightning to our capacity to chunk out memory circuits has not been strapped to the global economy as a whole.

Measures of the total built environment are difficult to come by, and even proxies for that seem at best obscure. Since DeLong specifies the idea of a 100-story-tall building, though, there is at least one interesting statistic that can be readily produced. Up until 1970, there was precisely one such building, and it was the Empire State Building, which held that record from 1931 until 1972 (at which time the newly completed World Trade Centers in New York City claimed the crown).

https://en.m.wikipedia.org/wiki/History_of_the_world%27s_tal...

Naturally, there's been some contention for that prize since. A total of four additional tallest structures are listed: the Sears Tower (completed in 1974), the Petronas Towers, Taipei 101, and the Burj Khalifa. If we look at the list of the world's tallest buildings, and use the ESB's 381 meter height as a minimum qualification, there are by my count 37 such structures. Again, this seems slightly less than 150 billion.[2]

Finally, energy consumption. In 1955, this was, roughly, 100 exajoules. In 2017 it is, roughly, 500 exajoules. The multiplier would then be ... 5. A number somewhat less than 150 billion.[3]

The question which arises out of this is what is it about information technology that allows for a 150-billion-plus increase in capabilities, whilst total GWP (20x), skyscrapers (37x), and energy (5x) have seen far, far, far less expansion?

There's another question which asks if we're actually including full costs, which I'll note but leave off the table for this discussion.[4]

But the question I would like to ask is what additional service value is being provided for all that the iPhone offers?

Consider that it is, ultimately, an information delivery device. And that the information end-consumer, the human tethered to it, has an almost ludicrously low consumption capability. Sure, you can deliver gigabytes or terabytes of source data to a human, but the amount of that which is absorbed, over the course of a day, amounts to ... a few megabytes, at most. And we're talking single digit values here.[5] What the iPhone can deliver is video, audio, images, and text. Through a viewport roughly the size of a 3x5 index card. The equivalent 1955 technologies it replaces are a notebook, a telephone (and probably some form of answering service or secretarial pool), the not-yet-invented transistor radio, a deck of cards or pocket game, a newspaper and/or magazine, a paperback book, a letter. A pile of index cards itself.[6]

And ... the iPhone X carries any number of unintended consequences: the loss of liberal democracy, undermining a century-old tradition of advertising + subscriber based print media, journalism, adtech, concentration, possibly an entire generation.[7] Unintended consequences are a real bitch.

DeLong's calculus is exceptionally insufficient.

________________________________

Notes:

1. Wikipedia. Which, coincidentally, is citing one Bradford DeLong as its source: https://en.m.wikipedia.org/wiki/Gross_world_product

2. https://en.m.wikipedia.org/wiki/List_of_tallest_buildings

3. Gail Tverberg, after Vaclav Smil and BP: https://ourfiniteworld.com/2012/03/12/world-energy-consumpti...

4. Much of this revolves around the question of natural capital accounting. The good news is that this is entering mainstream economics, see the World Bank for example. The bad news is that it's still improperly founded. Steve Keen's work on energy in production functions is also of interest, though that admits yet another factor.

5. Consider audio. The human limit of perception is roughly 20 impulses per second, a/k/a 20 Hz, which is the threshold at which beats become a tone. Given 86,400 seconds/day, at 20x, we've got 1.7 million bits of data, or 216 KB of audio-encoded pulses. For printed material, a 250 words/min reading pace is fairly typical, which works out to 2.16 MB/day, sustained for 24 hours.
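
A quick check of that arithmetic, following the footnote's own assumptions:

  seconds_per_day = 86_400
  audio_bits = 20 * seconds_per_day          # 20 impulses/sec, all day
  print(audio_bits, audio_bits / 8 / 1e3)    # 1,728,000 bits ~ 216 KB

  words = 250 * 60 * 24                      # 250 words/min for 24 hours
  print(words * 6 / 1e6)                     # at ~6 bytes/word -> 2.16 MB/day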

6. a/k/a the Hipster PDA: http://www.43folders.com/2004/09/03/introducing-the-hipster-...

7. https://www.theatlantic.com/magazine/archive/2017/09/has-the...

25
ZenoArrow 5 hours ago 1 reply      
The factor that many seem to have overlooked with the $1000 price of the iPhone X is that, if it proves successful, it sets a precedent for future smartphones.

The spec improvements of the iPhone X do not line up with the accompanying price increase. Even by iPhone standards it's an expensive device. If you justify the price increases, you're just telling Apple you're prepared to be short-changed when the iPhone X's successor comes out.

26
snambi 8 hours ago 0 replies      
In 2057 the iPhone X would cost maybe 99c?
27
madengr 8 hours ago 0 replies      
Though I doubt an iPhone could do all the processing those SAGE computers did, despite being infinitely more powerful as a general purpose CPU.
28
kimmy13 9 hours ago 3 replies      
this article is just made to justify the phone's price.
29
tdbgamer 9 hours ago 3 replies      
Honestly, who cares. A ton of modern inventions would've cost way more money in 1956. I'm not gonna go to the store and say, wow this microwave would've cost millions in the 50s, that's a super reasonable price. Compare 2017 phone prices to other 2017 phones.
30
guelo 8 hours ago 0 replies      
Is the point supposed to be that the iphone x is not expensive? If so it's a dumb point. Moore's law is exponential, news at 11.
31
ddmma 8 hours ago 0 replies      
Amazing prediction... The A11 Bionic chip would have been alien science fiction then, and might have been considered a privacy issue given real-life concerns such as the rise of communists or power consumption. Still, it's not an argument for the iPhone X's high price... I like the Raspberry Pi analogy more.
32
IncRnd 8 hours ago 0 replies      
Well, okay. But the monopoly would have also kept costs down. Yes - all iPhones would have looked identical, but everyone would have had one!

Another plus - the software stack would have been far smaller. You wouldn't need a 32 Gig phone just to install some apps. All processing would have been done in the cloud, on the mainframe. The apps would all have been dumb and only screen viewers for the mainframe.

:)

3
Why Dropbox decided to build its own infrastructure and network techcrunch.com
242 points by zachperret  12 hours ago   119 comments top 14
1
antoncohen 10 hours ago 4 replies      
Dropbox hasn't dropped AWS, they moved things off AWS as it made sense to. The article is talking about two things, the move of file storage and a network backbone. Neither of which were done recently.

The file storage move from S3 to Magic Pocket is detailed in these blog posts:

https://blogs.dropbox.com/tech/2016/03/magic-pocket-infrastr...

https://blogs.dropbox.com/tech/2016/05/inside-the-magic-pock...

https://blogs.dropbox.com/tech/2016/07/pocket-watch/

The network backbone is talked about here:

https://blogs.dropbox.com/tech/2017/09/infrastructure-update...

2
epa 11 hours ago 3 replies      
This means their EBITDA now probably shows profitability (servers they own are amortizable [the A in EBITDA], whereas AWS is expensed).

Edit: Amortization/depreciation can generally be used interchangeably.
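
A toy illustration of the point (all numbers invented): EBITDA adds depreciation back, so owning servers and renting the same capacity from AWS produce very different EBITDA for economically similar spending.

  revenue, other_opex = 100.0, 60.0

  # Case A: rent capacity from AWS for 30/year - an operating expense
  ebitda_aws = revenue - other_opex - 30.0      # = 10.0

  # Case B: buy servers for 90, straight-line depreciation over 3 years;
  # the 30/year of depreciation sits below the EBITDA line
  ebitda_owned = revenue - other_opex           # = 40.0

  print(ebitda_aws, ebitda_owned)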

3
swampthinker 11 hours ago 5 replies      
I'll be 100% honest, I didn't realize Dropbox was still on AWS. You figure at a certain scale it makes more sense to run your own solution.
4
mc32 9 hours ago 2 replies      
What does AWS do when someone moves 500PB of storage off of their systems? Do they sit idle till some other big customer comes along, or is their growth so phenomenal that a 500PB move-off doesn't slow new (storage) deployments at all and they just keep up their pace?
5
ChicagoDave 11 hours ago 0 replies      
I had a chance to compare all of the file services a few years ago in a large corporation and I thought out of all of them, the DropBox engineers were the strongest. Microsoft second.
6
sandworm101 22 minutes ago 0 replies      
Three data centers "built" by only the dozen people on the infrastructure team? Not possible. I wish articles like this wouldn't hype small teams when it is obvious that most of the work was outsourced. It would seem that Dropbox here was still operating as a customer: ordering rather than physically building much of anything. Those dozen Dropbox people were not running cables.
7
qaq 11 hours ago 1 reply      
People making a sane choice is very refreshing. I wish my company would make the same decision.
8
iUsedToCode 2 hours ago 0 replies      
Dropbox is cool and all, but i hate their pricing model. I just want to keep about 50-100 GB there (i don't hoard stuff). I don't wanna pay $10 / month for that. At backblaze i pay less than $0.5 / month. I could pay quadruple, since Dropbox is a lot better service (and quite a different one, too). But not 20 times as much.

I know that Dropbox doesn't care about my tiny dollars and all. But why not let customers pay for what they use? This "constant growth" bullshit is probably the reason they don't care.

9
justonepost 10 hours ago 3 replies      
Is 500 petabytes really that epic? 1500 tapes? I could probably stuff that in the back of my minivan.
10
shevy 10 hours ago 1 reply      
> While cost is always something that we consider, our goal with our network expansion was to improve performance, reliability, flexibility and control for our users - which we have succeeded in doing, a company spokesperson told TechCrunch.

Who believes this?

It was, as is easily noticeable by everyone, about wanting to reduce the cost. Which is fine, everyone does so, so why not admit that it was the primary impetus?

I would not want to outsource control over any larger company that I were to run to other, even bigger companies.

11
kibwen 10 hours ago 0 replies      
The article is dated today, but isn't this news from a few years ago? https://www.wired.com/2016/03/epic-story-dropboxs-exodus-ama...
12
paxy 11 hours ago 0 replies      
They did this a while ago.

Here's an article from last year with a lot more detail - https://www.wired.com/2016/03/epic-story-dropboxs-exodus-ama...

13
goptimize 10 hours ago 3 replies      
"Were talking about a company that had 1500 employees, with just around a dozen on the infrastructure team" - what the rest 99% of the company is doing, marketing?
14
tcptraceroute 11 hours ago 0 replies      
The title here isn't quite accurate.

Dropbox has moved user data from S3 to its own colocated data centers over the past few years, and is also doing compute in those data centers too. The compute actually existed for quite a long time - in the past you'd be talking to a Dropbox run server which would connect back to S3 to retrieve data.

Dropbox is definitely still an AWS customer, just not a major S3 or EC2 customer anymore. For example, all transactional email uses SES, and DNS is hosted on Route 53.

4
All the Linear Algebra You Need for AI github.com
200 points by stablemap  11 hours ago   26 comments top 12
1
aqsalose 3 hours ago 0 replies      
"All the Linear Algebra You Need".

The title is misleading. In general, I like an applications-first approach, but I have a hard time liking a tutorial with a title like that which proceeds to say things like

>Neural networks consist of linear layers alternating with non-linear layers.

without providing a proper definition or intuition for what is linear and what is not. In the next step they construct NN layers out of ReLUs without telling you what a rectified linear unit is, only barely hinting at what it is supposed to do ("non-linearity").

The article is not worthless; it's a nice tutorial on building an NN classifier for MNIST, but don't expect full mastery of the mathematics relevant to understanding NNs after reading this tutorial.

Chapter 2 of the book by Goodfellow et al. is a serviceable concise summary of the relevant concepts, but I don't think that chapter alone is good primary learning material if you are not already familiar with the subject. For that I'd recommend reading a proper undergrad-level textbook (and doing at least some of the problem sets [1]), one example: http://math.mit.edu/~gs/linearalgebra/ and then continuing with e.g. the rest of Goodfellow's book.

[1] There's no royal road into learning mathematics. Or for that matter, learning anything.

2
jwdunne 2 hours ago 0 replies      
Hrm. I find a lot of stuff on linear algebra not so intuitive or motivating. It's a bit harder to get through. A big motivator is the sheer scale of application, along with calculus. Yeah it's a must for AI but really linear algebra underpins much much more and gives you awesome problem solving tools in general.

As for intuition, a better approach may be to get 3 good books and cycle through them. Where one doesn't feel so intuitive at a given point, the other 2 might. They will explain things in a different way with different examples. Connecting the ideas across 3 sources will help it sink in.

For a light intuitive introduction, give this a try:

https://betterexplained.com/articles/linear-algebra-guide/

Maybe it's a little too light but the idea is important: voraciously grasp the ideas intuitively. If none are forthcoming, try to make one.

Another linear algebra book, by Hefferon, was a 'third' book:

http://joshua.smcvt.edu/linearalgebra/

It's free too. I think someone else has mentioned No Bull Linear Algebra. That's a cool book too.

I had to put my learnings aside a few months ago but I made more progress this way than just following one book.

3
ivan_ah 7 hours ago 0 replies      
More linear algebra stuff to read with your weekend morning coffee: https://minireference.com/static/tutorials/linear_algebra_in...

See also Section VI in this SymPy tutorial https://minireference.com/static/tutorials/sympy_tutorial.pd... or LA topics covered with numpy http://ml-cheatsheet.readthedocs.io/en/latest/linear_algebra...

4
hal9000xp 4 hours ago 6 replies      
Is there any book which actually explains where matrices and their rules come from? Instead of throwing matrix multiplication rules at you in a dogmatic way so you blindly follow them like a mindless robot?

I know there are lectures in YouTube from 3Blue1Brown:

https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQ...

So I want a book which covers linear algebra in a similar manner.
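
For what it's worth, the standard non-dogmatic answer (sketched here, not taken from any of the resources above) is that the multiplication rule is forced on you once you decide a matrix represents a linear map and multiplication means composition:

  Let $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times p}$
  represent the maps $x \mapsto Ax$ and $y \mapsto By$, and define $AB$ as
  the matrix of the composition $y \mapsto A(By)$. With $e_j$ the $j$-th
  standard basis vector, $b_j$ the $j$-th column of $B$, and $a_k$ the
  $k$-th column of $A$:
  \[ (AB)e_j = A(Be_j) = Ab_j = \sum_{k=1}^{n} b_{kj}\, a_k \]
  so the $(i,j)$ entry has to be
  \[ (AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} \]
  which is exactly the row-times-column rule: the unique definition that
  makes matrix multiplication agree with composition of linear maps.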

5
bra-ket 24 minutes ago 0 replies      
a great intro to linalg for programmers is "Coding the Matrix: Linear Algebra through Applications to Computer Science" : https://www.amazon.com/Coding-Matrix-Algebra-Applications-Co...
6
derekmcloughlin 5 hours ago 0 replies      
I found that chapter 2 of the Deep Learning book by Ian Goodfellow, Yoshua Bengio and Aaron Courville is quite good.

http://www.deeplearningbook.org/contents/linear_algebra.html

7
dragandj 4 hours ago 2 replies      
All the linear algebra that is needed for AI is matrix multiplication and broadcasting?

Am I the only one who is baffled by this?

8
Yuioup 4 hours ago 0 replies      
Thank you for this! I've been re-learning Linear Algebra since my question[1] was answered on an earlier HN thread[2] on fast.ai. This will definitely help.

[1] https://news.ycombinator.com/item?id=14878199

[2] https://news.ycombinator.com/item?id=14877920

9
tw1010 4 hours ago 0 replies      
Right now, sure. But give it five years and you'll no doubt also need at least a year's worth of undergraduate (pure) math.
10
backpropaganda 7 hours ago 0 replies      
All the linear algebra you need for AI: http://cs229.stanford.edu/section/cs229-linalg.pdf
11
methodover 9 hours ago 2 replies      
Is there a website that could interpret this... iPython notebook, I think it is? I'd love to read it on mobile.
12
alishan-l 8 hours ago 0 replies      
Thanks for sharing!
5
Chinese maze: Village makes giant QR code from trees bbc.co.uk
34 points by abalog  5 hours ago   7 comments top 3
1
novalis78 35 minutes ago 0 replies      
Yes, my car had one too, so anybody could tip it when they thought I was driving well. Fun Bitcoin project of 2012. Though hardly anybody knew what it was, it actually did get some tips.
2
heroprotagonist 2 hours ago 4 replies      
I tried two QR readers on the photos in the article. They did not detect the code.

Would it have been much more effort for the photographer to take a photo that was scannable? Or is there just too much variance in the arrangement and no reader would actually scan it even without a photo?

3
qz_ 2 hours ago 0 replies      
I LOL'd at the site headline: "Chinese maze: Village makes giant tech code from trees"
7
Improbably Frequent Lottery Winners cjr.org
129 points by peterkshultz  10 hours ago   54 comments top 14
1
cyberferret 35 minutes ago 3 replies      
I can't say too much, because they are a current client of mine, but I've been doing work with a wholesaler for one of the biggest lotteries here in my country, and they seem to have a LOT of customers who bulk buy lottery tickets in what seems to be patterns.

I am talking hundreds of thousands of dollars of lottery tickets per week. Their winnings are also along those lines. I don't know about the industry, but these sound like professional lottery players who check odds and play the system much as they would the stock market. They probably have set systems in place and complex prediction and statistical algorithms which ensure they win just a bit more than they lose?

I am thinking that 'pro' players such as these might skew the statistic for winners to a large extent?

2
danso 8 hours ago 0 replies      
That's funny to see this article. I remember an article about it -- also in CJR -- when the original series broke in 2014 [0]. Looks like this article, which is from today, is about other reporters taking it nationwide. Which is great. One of the more confounding things (to me) about journalism is how a great investigation in one state isn't done in another. Not by the reporting team of the first story, but by any journalist in other states, because it's easy "money", in the sense that you have a framework to copy/imitate, and if there was a corrupt system in one state, there's likely to be something similar in all the other 49 states. And it will still be "news" because of how fundamentally different state laws and bureaucracies are. But then again, sometimes newsrooms have a culture of not wanting to be seen as doing something that has been perceived to have been done, even if it was by a non-competitor (e.g. a non-Florida newspaper, in this case)

Seeing investigative efforts replicated nationwide not only makes for 50 unique-in-their-own-way stories that benefit 50 different jurisdictions. You also get bonus meta-stories about states that are severe outliers, when the 50 are compared and contrasted.

[0] http://archives.cjr.org/united_states_project/palm_beach_pos...

3
fergie 7 hours ago 1 reply      
Years ago I had a friend whose father made a living smuggling cigarettes (and probably other stuff) from Northern Africa to Gibraltar. He laundered his income by buying winning lottery tickets.
4
rodrigocoelho 4 hours ago 0 replies      
Never forget João Alves, the Brazilian politician who had won part of 200 lotteries because, as he said, "God helped me, and I made money".

http://www.nytimes.com/1993/10/30/world/new-government-corru...

5
projectant 3 hours ago 1 reply      
Clarance Jones http://archive.boston.com/business/articles/2011/11/05/frequ...

TL;DR - Audited by IRS but had 200 boxes of losing tickets & records in storage facilities, which the auditors apparently didn't want to search through. He ended up winning his defense against the state's tax claims as well.

How does he win so much? Is he psychic?

6
cyberferret 3 hours ago 1 reply      
Persistence might be key? Some people may be addicted to gambling with lottery tickets, or may just buy more than others in the same area.

I remember once early in my IT career, I was doing work for the local government agency that was responsible for issuing all the 'scratch the ticket and find 3 matching symbols' lottery games in our state. They used to sell the tickets at the front counter of the agency office as well as various shops and newsagencies around town etc.

While I was working on a PC in the manager's office, another manager came in and said that one of the games had sold ALL tickets at the other retail outlets, and that they had about a hundred $2 tickets left at their front desk and the main prize of $1000 hadn't gone off yet.

When one manager left the room, the other manager looked at me and said "Well, the rules forbid us from buying those tickets, but if you have a spare couple of hundred on you, I'd buy up all those tickets at the front counter...". I thought it was all a bit weird and didn't bite, but I found out later that a friend of someone who worked there bought all the tickets, and sure enough, won the $1000 prize in that particular game. Made me wonder how many 'friends' of people who worked there were in on the old unsold ticket grey marketing?

Also while growing up, there was a family that lived down the road from us who always seemed to win almost every giveaway competition in our town - free TV's, food, whitegoods, airline tickets, you name it. One day I was actually there visiting their son who was about the same age, and I noticed that their mother was entering competitions almost as a full time occupation. She would get various catalogues and our local newspaper and sit down with the kids every afternoon cutting out and sending out coupons, competition entries and sweepstakes all the time - they would have to be sending out at least 20+ per day. Way to stack the odds in their favour.

They eventually left town in the most spectacular fashion - when Michael Jackson did one of his huge tours of Australia, the family won a set of tickets to see him in a sweepstakes, then promptly bought tickets for ALL of his other shows in other states in Australia! This information got leaked to the press, and Jackson found out about it, and ended up inviting this family to Neverland in the US where 2 of the kids ended up staying for years.

You know that old saying - you make your own luck. I think this particular family lived it.

7
nrmitchi 8 hours ago 0 replies      
This post seems to be a summary of an entire series relating to 'Improbably Frequent Winners'.

The first post of the actual series, which is a lot more in depth, is here: http://www.pennlive.com/watchdog/2017/09/defying_the_odds_pa...

8
jmharvey 7 hours ago 3 replies      
It's too bad the article doesn't go in more depth. There are a few plausible explanations for why people claim an unusually high number of prizes:

1) Someone is buying a huge number of tickets, and just winning proportional to the amount that they spend.

2) Someone is a "10 percenter." They buy winning tickets at a discount to the prize amount, collect losing tickets that they didn't actually buy, and commit blatant tax fraud by claiming "losses" that offset their "winnings."

3) Someone knows which tickets are going to win before buying them.

All three of these stories are interesting, but they're very different. Most coverage seems to assume (2), but stops at "tax agency is incompetent at detecting blatant tax fraud" without exploring why the tax agency is so bad at what they do.

9
jimhefferon 2 hours ago 1 reply      
I don't see that anyone has yet referenced this article.

https://www.wired.com/2011/01/ff_lottery/all/1

It explains how to win (an old lotto), and more importantly why there is a way to win.

10
Declanomous 8 hours ago 4 replies      
I have trouble understanding how winning the lottery this often doesn't immediately raise red flags for fraud. Is there some reason that people might buy winning lottery tickets?
11
MarkMc 5 hours ago 1 reply      
There is a bigger issue here: Why wasn't such an analysis done 50 years ago? Where is the incentive for people to uncover fraud?

All governments should have a rule that says: if you uncover fraud or waste in a government department you get 5% of the money that taxpayers save.

12
inetknght 8 hours ago 2 replies      
I sincerely hope that the states would be willing to look at Oregon and Florida's solutions to the requests. I actually also really like the CD being issued, because it's a physical thing that you could take into court in contrast to a web page (which often isn't quite as useful these days because all of these fancy web apps) and know that it hasn't changed one iota.

It'd be better if it were a usb thumb drive if it's to be a bit more modern.

13
ryacko 6 hours ago 2 replies      
It's possible; some people have gotten struck by lightning over a dozen times, which in theory is as likely as winning the Powerball twice. (1 in 12,000 likelihood of being struck over your lifetime)
14
romeopov 6 hours ago 2 replies      
Has anyone considered some of these people might be psychic? Professor Daryl Bem's published study "Feeling the Future" comes to mind.
8
Equifax Releases Details on Cybersecurity Incident, Announces Personnel Changes equifax.com
131 points by diggernet  13 hours ago   136 comments top 12
1
azurezyq 12 hours ago 5 replies      
If a hacker can use just one CVE to break into your system and do a database dump (or equivalent), the system is architecturally wrongly designed and protected by only a single layer of security. Which means anyone on the inside can access pretty much the same info as the hacker, which is horrible.

For example, are the items in DB encrypted? Are database backups encrypted? Are different items encrypted using different keys? I don't think EFX did it right.
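
For concreteness, a minimal sketch in Python of the per-item, per-key idea (envelope encryption). This assumes the third-party `cryptography` package and is an illustration of the pattern, not a claim about what EFX's stack looked like:

  from cryptography.fernet import Fernet

  master = Fernet(Fernet.generate_key())   # in practice held in an HSM/KMS

  def encrypt_record(plaintext):
      data_key = Fernet.generate_key()          # fresh key per record
      ciphertext = Fernet(data_key).encrypt(plaintext)
      wrapped = master.encrypt(data_key)        # data key wrapped by master
      return wrapped, ciphertext                # store both, discard data_key

  def decrypt_record(wrapped, ciphertext):
      return Fernet(master.decrypt(wrapped)).decrypt(ciphertext)

  w, ct = encrypt_record(b"123-45-6789")
  assert decrypt_record(w, ct) == b"123-45-6789"

With per-record keys, a raw table dump is useless unless the key-wrapping service is compromised too, which is exactly the second layer being asked about here.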

2
yebyen 13 hours ago 2 replies      
Wow, MarketWatch is right. They are trying to erase Susan Mauldin (the former CISO), she is not even mentioned in this article. Does anyone with Google juice have a way to recover the interviews that are referenced in [1]?

I watched them before they came down on Sept 10, and they were eye opening. I can say with certainty that the transcripts are not complete, because I remember "resistance to cloud is futile" and other such gems which are nowhere to be found in the partial transcripts that you can still find on the linked archive.is pages.

[1]: https://hollywoodlanews.com/equifax-chief-security-officer/

3
noncoml 13 hours ago 2 replies      
- In addition, credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers, were accessed.

This looks really bad. I don't know what's the point of letting them operate anymore. They utterly failed on the single thing they were supposed to do.

4
guelo 12 hours ago 2 replies      
The CVE and the patch came out on March 7. Exploits were already being detected in the wild at the time by perimeter security vendors. See http://blog.talosintelligence.com/2017/03/apache-0-day-explo...

Equifax "believes" that the hackers got in on May 13. They had some kind of intrusion detection system that finally detected the intrusion on July 29. 5 months after the "Critical" CVE alert went out. During that time security vendors were adding firewall rules to stop the attack. But apparently Equifax didn't have any other security in front of the Struts server.

That just seems like unconscionable incompetence and malpractice for such a high value target.

5
heisenbit 34 minutes ago 0 replies      
They are really in deep trouble and need to manage the story carefully. If people on a broad basis start freezing their credit history (one of the most effective means to protect against abuse), their and their competitors' ability to sell data to other parties will suffer (see also https://wolfstreet.com/2017/09/15/equifax-sacks-2-executives... )
6
rmrfrmrf 7 hours ago 0 replies      
Actually looking at the CVE made my stomach drop. That is a horrible horrible bug. Getting access to the shell while under the web app user's environment potentially means that all secrets were available either in the app server administration, the user environment, or in a readable file on the system. Effing yikes.

Also knowing how JNDI usually gets configured on app servers (sometimes with credentials and all) would have made recon ridiculously easy.

I think some have alluded to it, but what some people in the comments don't seem to understand is that RCE is an "all bets are off" type of situation. DEFCON 1 to be sure. Prevention is really the only good answer.

7
qaq 51 minutes ago 0 replies      
For everyone patting themselves on the back for how much better they are at securing their data: realize that for way over 80% of incidents the attack vector is email and social engineering. Look at red-team exercises by competent teams and try to honestly answer whether they would have succeeded with that tactic at your company. So yes, having much better practices than what we see here is very important, but it will not really help much if you are being targeted by a competent adversary.
8
simpfai 12 hours ago 7 replies      
Failure to apply a patch for a two-month-old bug led to this entire nightmare scenario. What are some best practices to ensure that one's dependencies are always up to date?

-asking as a relatively inexperienced dev

9
niij 11 hours ago 0 replies      
They were able to use a subdomain for their announcement, but not for their Breach service which asks for your SSN.
10
blacksqr 11 hours ago 2 replies      
Gotta admire the artful way they gave the appearance of disclosure while avoiding answering the most damning question: why did it take so long for them to patch Struts?

"The particular vulnerability in Apache Struts was identified and disclosed by U.S. CERT in early March 2017."

"Equifax's Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure."

?????????!!!!!!??????

"While Equifax fully understands the intense focus on patching efforts, the company's review of the facts is still ongoing."

11
21 12 hours ago 1 reply      
What about storing sensitive data in something like an HSM, which rate-limits access, so you could only, let's say, access 10,000 records per day.

Yes, developing against such a system might be annoying (think about updating a new piece of data for all records).

But it feels to me that we need a way to rate-limit access to sensitive data, to prevent a wholesale dump in a short time. But you still need other systems in place, to prevent a hacker lurking around for months until they get all the data.
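
A minimal sketch of that rate-limit idea in Python (illustrative only; a real HSM would enforce the quota in hardware rather than trusting application code):

  import time

  class RateLimitedStore:
      def __init__(self, records, max_reads_per_day=10_000):
          self._records = records
          self._max = max_reads_per_day
          self._budget = max_reads_per_day
          self._window_start = time.time()

      def get(self, key):
          if time.time() - self._window_start >= 86_400:  # new day
              self._window_start = time.time()
              self._budget = self._max
          if self._budget <= 0:
              raise PermissionError("daily record-access quota exhausted")
          self._budget -= 1
          return self._records[key]

  store = RateLimitedStore({"alice": "ssn:123-45-6789"})
  print(store.get("alice"))   # the 10,001st read in a day would raise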

12
dingo_bat 7 hours ago 0 replies      
Do they plan to change their Chief Security Officer, who has an MA in Music Composition?
9
German foreign spy agency BND attacks the anonymity network Tor netzpolitik.org
143 points by y7  16 hours ago   37 comments top 8
1
jfaucett 14 hours ago 0 replies      
For those who can read German, here's the original with a lot more information including cited documents and graphs and charts. https://netzpolitik.org/2017/geheime-dokumente-der-bnd-hat-d...
2
mirimir 11 hours ago 3 replies      
> How exactly the spy agencies want to crack Tor remains vague.

> Precisely how the BND plans to chop Tor is unfortunately redacted in the document we obtained. But as before, the spy agency refers to public research. To implement the attack, it is likely that the spies run their own servers in the Tor network. M.S. points to passive snooping servers, which are presumably operated by the NSA, and emphasizes the protection of the anonymity of the spy agencies.

And indeed, there are no specifics.

Tor Project acknowledges that Tor is vulnerable to global adversaries. With enough intercepts, they can correlate traffic at various relays, and connected users and servers. There's nothing magic about onion services. It's just that there are seven relays between users and servers, rather than just the normal three for Tor.

But hey, it provides better anonymity than any alternative. Other than meeting in remote locations, anyway. And you can add VPNs to the mix. I always use Tor through nested VPN chains. That adds misdirection. But perhaps most importantly, it adds latency and jitter, which mitigate traffic correlation attacks.

Edit: As a fun science project, you can play with traffic correlation between you and your private onion service. You use unlisted private bridges as entry guards, both locally and for the onion service. So it's only your traffic that gets analyzed. And you don't need sophisticated software. Wireshark and a spreadsheet are enough. So you have packet captures from your local machine, and from the VPS running the onion service. Using Wireshark, you export bitrate in millisecond bins. Then in the spreadsheet, you have two columns, one with each bitrate series. Just create a third column for the product. Each sheet will hold 1E6 lines, or 1000 seconds. Excel works best, because it uses multiple cores. R would be better, because you could crunch segments in parallel.
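
A minimal sketch of the correlation step in Python, for anyone who'd rather skip the spreadsheet; the file names and one-bitrate-per-line format are my assumptions:

  # Pearson correlation of two per-millisecond bitrate series, one exported
  # from the local capture and one from the onion-service VPS.
  def load_series(path):
      with open(path) as f:
          return [float(line) for line in f if line.strip()]

  local = load_series("local_bitrate_ms.txt")
  remote = load_series("vps_bitrate_ms.txt")
  n = min(len(local), len(remote))
  local, remote = local[:n], remote[:n]

  ml, mr = sum(local) / n, sum(remote) / n
  cov = sum((a - ml) * (b - mr) for a, b in zip(local, remote))
  var_l = sum((a - ml) ** 2 for a in local)
  var_r = sum((b - mr) ** 2 for b in remote)
  print("r =", cov / (var_l * var_r) ** 0.5)   # near 1.0 = flows match

The spreadsheet's product column is the unnormalized version of the same statistic; Pearson's r just rescales it so 1.0 means a perfect match.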

3
KGIII 14 hours ago 3 replies      
I thought it was well established that timing attacks could identify the user? My understanding is that if you remain on the network (domains ending with .onion), you're still able to remain anonymous.
4
jpelecanos 8 hours ago 1 reply      
Based on the article, it seems that the German government combined foreign and signals intelligence into one agency, the BND. In the US, however, Truman separated them into the CIA [0] and the NSA [1], respectively. Thus, is there a specific reason for BND's dual role?

[0] https://www.cia.gov/library/center-for-the-study-of-intellig...

[1] https://www.nsa.gov/news-features/declassified-documents/tru...

6
yllaucaj 14 hours ago 1 reply      
Could someone explain the bit about "The Internet for Dummies" at the beginning? Why is that quote there?
7
tryingagainbro 4 hours ago 0 replies      
So BND and NSA (+ five eyes) can do this. It's probably super-safe to use in smaller countries, unless you talk about stuff that DEA /CIA /Counter Terror might be interested in.
8
CommentCard 13 hours ago 2 replies      
Couldn't you circumvent this by using a VPN to access TOR? It would obfuscate the entry point.
11
Modern JavaScript Tutorial github.com
41 points by galfarragem  4 hours ago   8 comments top 3
1
grondilu 13 minutes ago 3 replies      
Question:

I started a JS project lately. I tried to write it as modernly as possible (using as many ES6 features[1] as I could), and yet I stumbled on difficulties that I blame on the lack of a type system and polymorphism.

I'm considering writing my project in a typed language that compiles to JS, such as Scala or Haskell. But I would have to learn that language basically from the start.

Is it a good idea, and if so, which language would you recommend?

2
faitswulff 15 minutes ago 0 replies      
This is the source for the website at https://javascript.info/ . It's actually quite good - I'm reading about garbage collection at the moment. Looking forward to the inheritance/prototype section, something that's never quite stuck for me.
3
edzo 1 hour ago 1 reply      
Do you guys know a good tutorial focused on server-side JS (Node, Express, etc.)? What's the best resource to learn Node? Thanks in advance.
12
Practical File System Design: The Be File System (1999) nobius.org
123 points by tosh  14 hours ago   44 comments top 8
1
jasode 1 hour ago 0 replies      
Since the author of BeFS (1999), Dominic Giampaolo, is the lead architect for Apple's APFS (2016), it would be more helpful if there's an article that shows the delta between BeFS and APFS.

In the intervening 17 years, Dominic Giampaolo surely learned new things that invalidated some assumptions in 1999 and/or discovered new demands on file systems that he didn't foresee.

Imo, the intellectual evolution from the vantage point of 2017 & hindsight would be much more interesting than trying to read through a 247 page pdf.

2
kobeya 13 hours ago 6 replies      
I remember reading this book way back when. It is a great introduction to file system design. HOWEVER, the core thesis of BFS is basically "Let's see what happens when we make a filesystem that is more like a database." The TL;DR of the BeOS experience is that this is in fact a terrific idea, if you don't care about backwards (POSIX) compatibility.

The book is terribly dated though and I'm not sure I would recommend it to anyone except as a historical reference. It lacks much of the latest file system research that has gone into things like ZFS and Btrfs.

If you want to go the BFS-like route, then I recommend just accepting the premise and getting a book or other resource on DBMS design, getting one on modern file system design, and then merging the two yourself.

4
nayuki 11 hours ago 0 replies      
A 27-minute presentation: https://systemswe.love/archive/minneapolis-2017/ivan-richwal... ; https://player.vimeo.com/video/209021697

"Metadata Indexes & Queries in the BeOS Filesystem", by Ivan Richwalski, at Systems We Love.

He explains cool features like extended attributes in files, and search queries whose results are updated in real time.

5
kev009 13 hours ago 0 replies      
Maybe relevant to see how it affected APFS design decisions due to Be FS developers working on APFS.

I am hoping Apple releases APFS with the xnu 10.13 code drop.

7
dmolony 14 hours ago 2 replies      
Please put [PDF] in the title.
8
alexsderkach 4 hours ago 0 replies      
A few years ago, in college, this was a holy grail for our final assignment
13
Interview: Apples Craig Federighi about Face ID techcrunch.com
145 points by craigkerstiens  16 hours ago   216 comments top 30
1
DCKing 15 hours ago 8 replies      
I don't get it. This focus on FaceID seems to me some successful marketing spin by Apple, and the media seem to be drinking the Kool-Aid (again). Face recognition in my book is not a replacement for a fingerprint reader.

See I don't doubt that FaceID will be good. I don't doubt the quality Apple provides when they introduce features like this. I'm sure it will work amazingly well, because unlike almost all previous implementations Apple actually makes purposeful hardware for it.

The thing is, I don't want to look at my phone to unlock it. With a fingerprint reader - on my current Xperia and my current iPhone - my phone is unlocked before it faces me. I unlock it when I get it out of my pocket. I unlock it while lying on the table when it's facing the ceiling. I unlock it to peek at messages in meetings under the table. Needing to face your phone to unlock it seems to just be a really weird concession to me. As others point out, it also seems to be a minor concession security wise. It seems to be some typical form-over-function Apple thing again - UX and security concessions for a better looking phone. They should've slapped a fingerprint reader on the back once it turned out the under-the-screen fingerprint reader wasn't viable.

Having to go through some ritual to face your phone and slide it open to unlock feels like a hassle and a downgrade. Especially on a $1000+ phone. So yeah, the Apple distortion field is still a real thing it seems. Or maybe the way I use my phone is weird. But either way, I don't get it.

2
wyc 16 hours ago 11 replies      
> So, if you were in a case where the thief was asking to hand over your phone you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.

I imagine that authorities will catch on pretty quickly: "keep your hands up and away from your pockets while we retrieve your device." It would have to happen before any kind of duress.

3
gnicholas 39 minutes ago 0 replies      
Interesting to see accessibility (for vision-impaired users) pop up here. Good to see that Apple thought about that and came up with something. I wonder if enabling the accessibility mode will result in haptic feedback that the device was unlocked.
4
usaphp 16 hours ago 1 reply      
> On older phones the sequence was to click 5 times [on the power button] but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while we'll take you to the power down [screen]. But that also has the effect of disabling Face ID, says Federighi. So, if you were in a case where the thief was asking to hand over your phone you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.

Interesting. I did not know that

5
favorited 16 hours ago 2 replies      
I assumed this was a link to Conan's sketch. Gruber pointed out how surprising it is "that Federighi is famous enough to spoof."

http://www.loopinsight.com/2017/09/14/craig-federighi-on-con...

6
kiliankoe 12 hours ago 1 reply      
How would FaceID be used to authenticate In-App-Purchases? Currently it's a finger on the home button, but I'd be somewhat scared if looking at the screen is all that's necessary to confirm a purchase. I'm pretty sure a second confirmation will be used, but I'm curious.
7
minimaxir 16 hours ago 2 replies      
A relevant note from the interview is official confirmation that the 5-presses-of-sleep/wake-disables-Touch-ID is intended as a privacy move. (and adds new info that the iPhone 8 has an easier squeeze-only trigger for this feature.)
8
makecheck 15 hours ago 2 replies      
What I worry about is drivers. Texting while driving has been illegal in multiple places for years but to this day it is downright easy to find an example of someone still doing it. Now imagine these fools deciding to turn away and look because their phone is not unlocking otherwise...
9
noncoml 15 hours ago 3 replies      
My understanding is that FaceID and TouchID are more of convenience features than security features, so I don't see what all the fuss is about.

Shouldn't one just disable both of them and use a long passlock code if they are serious about security?

10
thinbeige 15 hours ago 1 reply      
Very good interview. It shows that this feature is well thought-out and security-wise better than expected.
11
jafingi 5 hours ago 0 replies      
The thing I'm really looking forward to with Face ID is when apps are using it. Fine for unlocking the phone, but think about it.

Right now, if you access your bank, Dropbox, 1Password etc. you open the app, wait for the Touch ID dialog to appear, then move your finger from the screen (where it was) and to the Touch ID to authorize.

With Face ID, just open the app and you're authorized. This is a HUGE thing in terms of usability.

I love it.

12
ssijak 4 hours ago 0 replies      
Security-wise, maybe they could implement a feature where if you look at the phone with a recorded grimace (one eye closed, blinking fast, raised eyebrows, etc.) it would disable Face ID. So if someone wants to force you to unlock it, you just comply and make a face.
13
idlewords 14 hours ago 4 replies      
I trust Apple's security team to have implemented this in a way that preserves safety and privacy. Apple have shown themselves to be very good at that.

That said, I find Face ID extraordinarily troubling, because it normalizes the idea that your phone actively scans your facial features during use. Just like carrying around an always-on pocket beacon became part of the 'new normal' with the introduction of the smartphone, a phone that looks back at you during use will become part of the new normal, too.

When you combine this with business models that rely not just on advertising, but on promises to investors around novelty in advertising, and machine learning that has proven extremely effective at provoking user engagement, what you end up with is a mobile sensor that can read second-by-second facial expressions and adjust what is being shown in real time with great sophistication. All that's required is for a company to close the loop between facial sensor and server.

Apple is unlikely to be this company. But Google, Facebook and Amazon are. What I anticipate is the next generation of Home and Echo to have cameras (Amazon is already moving in this direction), along with whatever piece of hardware Facebook produces. The idea of devices that look back at you will gain acceptance, just like always-listening voice assistants have gained acceptance. All of these will become input sources to learning algorithms.

What is already an incredibly potent toolchain for political manipulation will become even more powerful, with no oversight, accountability, or even much understanding by those who built it on the way it can be profitably used and misused. Its effects will be field-tested in democratic elections that affect the lives of billions.

This is what Zeynep Tufekci has called the architecture for networked fascism, and by manufacturing a mass-market device with active facial scanning, with the best of intentions, Apple has moved us a big step further along this dismal road.

14
ksk 8 hours ago 0 replies      
It will be interesting to see what developers can do with the sensor, especially for AR. I wonder if you could stick two iPhones together back to back and have one "see" the world in 3D and have the other augment it in some way.
15
mithr 15 hours ago 0 replies      
> Developers ... [are] given a depth map they can use for applications ... This can also be used in ARKit applications.

This has me very excited. ARKit already does an impressive job of tracking as you move around your environment, and the major thing that seems lacking is being able to map existing objects to know where they are in relation to virtual ones.

If they do this well, especially when it comes to automatically occluding virtual objects with real ones, it could allow ARKit to be used for creating much more immersive experiences.

16
sigmar 16 hours ago 4 replies      
The comments about the "attention" detail make it sound like they have tested this extensively with diverse groups. But I've wondered if they had anyone "red team" to see if they could fool it. Faces are less private than fingerprints, and I imagine the police would jump to buy a tool that can unlock phones. Statistics about the false-positive rate are mostly meaningless in an adversarial case. Maybe it is mentioned in the forthcoming white paper.
17
shams93 15 hours ago 3 replies      
Law enforcement don't have to guess which of your 10 fingers you used for Touch ID; they don't need your consent to handcuff you and then point the phone at your face to get into your personal info. With Touch ID they at least have to get you to do something, to comply with your fingers; with this it seems much easier for authoritarians to abuse.
18
xz0r 7 hours ago 1 reply      
> If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode: it tried to read the people setting the phones up on the podium.)

Well, in the demo onstage, he did not enter the passcode after being asked for it. He just said "Let me try that again". Is that just a cover-up?

19
jpalomaki 14 hours ago 1 reply      
Can't help thinking Face ID is something Apple had to do, not something they really wanted.

Reading fingerprints with/through the touch screen [1] sounds like a better alternative if they ever get it working well enough.

[1] https://www.google.fi/amp/s/www.theverge.com/platform/amp/20...

20
mthoms 16 hours ago 2 replies      
I'm wondering whether Apple sees this technology totally replacing TouchID, or if they will continue to offer it in the next 3-5 years on some devices?
21
late2part 13 hours ago 0 replies      
I was raised that MFA = something you have and something you know. Your face and your fingerprint are the former. They're not passwords.
22
kibwen 16 hours ago 5 replies      
Quoted bullet points FTA:

- If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.

- If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode: it tried to read the people setting the phones up on the podium.)

- Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

- You'll also get a passcode request if you haven't unlocked the phone using a passcode (or at all) in 6.5 days and if Face ID hasn't unlocked it in 4 hours.

Lots of people in the threads here yesterday seemed to misunderstand how these things are implemented. Importantly, you cannot set up FaceID without first setting up a password (biometrics are a carrot for getting users to set up passwords, not intended to subsume passwords), and you'll still be prompted for a passcode around once a week so you won't go forgetting it.
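
Just for fun, here's what those quoted rules look like as code: a toy Python model, assuming the bullets above are the complete rule set (they almost certainly aren't; the real state machine will presumably be in the white paper):

 from datetime import timedelta

 def passcode_required(just_rebooted, since_faceid_unlock,
                       since_passcode_unlock, failed_attempts):
     # just_rebooted: bool; the since_* arguments are timedeltas since
     # the last successful unlock of each kind; failed_attempts counts
     # consecutive failed Face ID reads
     if just_rebooted:
         return True
     if since_faceid_unlock >= timedelta(hours=48):
         return True
     if failed_attempts >= 5:
         return True
     # the combined rule: no passcode unlock in 6.5 days AND
     # no Face ID unlock in the last 4 hours
     if (since_passcode_unlock >= timedelta(days=6, hours=12)
             and since_faceid_unlock >= timedelta(hours=4)):
         return True
     return False

 # e.g. Face ID unlock 5h ago, passcode 7 days ago, no failed reads:
 print(passcode_required(False, timedelta(hours=5),
                         timedelta(days=7), 0))  # -> True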

23
limeblack 13 hours ago 2 replies      
[removed]
24
dhanh 14 hours ago 4 replies      
Can the Face ID feature still work in an outdoor environment, especially under sunlight?
25
jccalhoun 14 hours ago 0 replies      
I wonder if they licensed any tech from Microsoft because faceid seems to work a lot like Windows Hello
26
fish_fan 15 hours ago 1 reply      
Wow, so the facial recognition is both used and trained in the secure enclave. Imagine a future where you could root your phone by showing it a picture of an artfully encoded payload printed on paper.
27
blackflame7000 14 hours ago 1 reply      
How did Apple get consent from a billion people to use their likenesses in its training set of images? Is it possible to train facial recognition software accurately using different photos of the same person vs. different people altogether?
28
pmontra 5 hours ago 1 reply      
I can unlock my phone with my fingerprints in the dark (for example, in bed during the night). Do I have to turn on the light to use Face ID, or do I have to type in a PIN instead? That would be progress /sarcasm.

I could also unlock my phone and my tablet with my face (I don't remember what Sony and Samsung call that method) but it means that I have to point their cameras at me. Maybe it's me, but their cameras tend to point at the ceiling when I'm holding them. Fingerprints are faster. Even the swipe pattern on the tablet is faster.

FaceID is probably more about the current impossibility of adding the fingerprint sensor to the bezelless and thin iPhone X. I expect Apple to solve that at the next iteration. FaceID would stay and be useful for some purposes, but people will be back to unlocking iPhones with their fingers.

29
randyrand 16 hours ago 3 replies      
How to hack FaceID:

You'll need:

1. 2 phones (at least 1 with an IR camera, such as another iPhone X)

2. a helper app

3. access to 10+ photos of the victim (Facebook typically)

4. a small mirror

With the helper app:

1. capture the victim's phone's unique IR dot pattern by shining their phone at a white piece of paper and recording it with the helper app (the helper phone needs an IR camera of course, such as another iPhone X)

2. make a 3D model of the person's face from the FB pictures

3. generate 2 animated videos of their face, 1 just a normal color video and another with the dot pattern applied

4. now you need to show the 2 videos to Face ID, using the mirror to show the correct video to the corresponding camera

30
StanislavPetrov 15 hours ago 2 replies      
It's simply astounding that so many people are drinking the Apple kool-aid. The avalanche of complaints that will be unleashed after the release of this phone will be massive and sudden. If you think this tech will work as flawlessly and trouble-free as Apple claims, I have some bridges to sell you in Houston.
14
Patching is hard; so what? cryptographyengineering.com
100 points by runesoerensen  14 hours ago   49 comments top 10
1
chronid 3 hours ago 3 replies      
> I don't operate production systems, but I have helped to design a couple of them. So I understand something about the assumptions you make when building them.

Start by operating production systems. You will rapidly discover that patching is not a technical issue (as I already said last time there was a patching discussion on HN). Patching is technically "easy".

But if the business does not prioritize (and allocate time towards) patching, and testing that systems still work afterwards, patching won't get done. There are features to develop and deadlines to meet and new systems to turn on and old systems to retire and oh-my-god don't touch those servers, otherwise the vendor will not support the prod environment anymore (no longer certified, yay!) and it costs us $30,000 to get certified again.

Too many times I see HN assuming that everyone is running on a public cloud with B/G deployments and immutable infrastructure you can rebuild and redeploy easily. Unfortunately that's not the case.

2
alkonaut 31 minutes ago 0 replies      
It's pretty simple to patch if you designed the system to be patchable in the first place and assign an enormous cost to not patching. If you assume running an outdated version of Struts for 3 months is a $1B expense, then you'll patch even if it means disruption to business operations or breaching some contract.

And if the system is properly designed then anything can be rolled back if it turns out there were unforeseen consequences.

3
chasb 12 hours ago 4 replies      
For teams that use a DevOps model, fast, predictable deploys that can be safely rolled back are important for security, for this reason.

If deploys are like playing Jenga on a sailboat, you're not going to be able to patch fast or safely.

That said, even becoming aware a CVE exists in the first place is still a problem for many teams. There are plenty of good options, it's just underinvested in early on.

4
sqldba 3 hours ago 0 replies      
Patching at an OS level (not what the author is talking about, probably) is extremely, extremely hard from an enterprise Microsoft platform production point of view:

1) Because there's no budget for tools. It's mostly WSUS and SCCM if you're lucky.

2) Because staff don't even know how to use those tools properly. Can't blame them because there's very poor documentation from MS and very little accessible training on design patterns that integrate them into large corps.

3) It's outsourced. So it's even worse than above.

4) Because management fights it every step of the way. They hate patching because they're stuck in the 80s, where they think every patch is deeply documented and you should only apply what's necessary. And that you should only apply security patches, ignoring updates which give critical stability fixes.

5) Because the enterprise as a whole is so poorly documented it's unclear who owns what or if a rollback is possible and what it would affect. So everything is in CYA overdrive.

6) And then nobody working there wants to own it because it's not sexy to patch. (As a geek I think patching is sexy and very fulfilling work, but I'm in the extreme minority).

None of this excuses anything. But most largish companies are still in the 80s, with IT managers in the 80s. I can't wait for them all to die.

5
rythie 2 hours ago 1 reply      
Is it so hard though? Do Facebook, Google etc. have this problem too? I doubt it.

Systems need to be designed to be patched regularly from the outset. Yes, defense in depth can help, but it's not defense in depth if one of the layers is pretty much always broken.

If they are relying on legacy systems that can't be proactively and regularly patched, they shouldn't be holding that data.

I think part of the problem here is that we can't see the risks companies are running with our data, and government doesn't regulate it.

P.S. I do manage systems, though not on this scale.

6
empath75 12 hours ago 2 replies      
I know Equifax is probably dealing with a hairy legacy system, but really you should design all your systems to be able to push out a fully tested blue/green deploy in hours, not months. There's no excuse not to with Jenkins/AWS/Docker/etc.

You should be testing and pushing updates regularly as a matter of course, not as a red alert only when security vulnerabilities are published.
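
For the skeptics: the cutover step itself really is tiny. Here's a minimal sketch of just the traffic flip in Python, with entirely hypothetical paths and health URL (your proxy layer will differ):

 import os
 import subprocess
 import urllib.request

 # hypothetical layout: nginx includes whatever active.conf points at
 GREEN_HEALTH = "http://127.0.0.1:8082/health"
 GREEN_CONF = "/etc/nginx/upstreams/green.conf"
 ACTIVE_LINK = "/etc/nginx/upstreams/active.conf"

 # 1. smoke-test the freshly patched "green" stack before it gets traffic
 with urllib.request.urlopen(GREEN_HEALTH, timeout=5) as resp:
     assert resp.getcode() == 200, "green is unhealthy, aborting cutover"

 # 2. atomically repoint the proxy at green (rename is atomic on POSIX)
 tmp = ACTIVE_LINK + ".tmp"
 os.symlink(GREEN_CONF, tmp)
 os.replace(tmp, ACTIVE_LINK)
 subprocess.run(["nginx", "-s", "reload"], check=True)

 # 3. the old "blue" stack stays up untouched, so rollback is the same
 # symlink flip pointed back at blue.conf

Everything hard lives around those three steps (building the green stack, tests you actually trust, session draining), and that's the real work.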

7
rdtsc 10 hours ago 2 replies      
> Specifically, these folks point out that patching is hard. The gist of these points is that you cant expect a major corporation to rapidly deploy something as complex

Then they should be taken to court and forced to pay for it. It's like saying of a company which runs trains and skipped the maintenance that caused an accident: "Well, you see, maintaining the brakes on the trains is too hard, so don't blame them too much...".

> The gist of these points is that you cant expect a major corporation to rapidly deploy something as complex as a major framework patch across their production systems.

So those corporations will be hacked, and then they should be dragged to court, made to pay, some might even go bankrupt and disappear altogether. Eventually only the companies that manage to build better infrastructure will survive.

Oh well, one can dream...

8
guelo 11 hours ago 1 reply      
Patching isn't that hard. Once Equifax finally noticed the intrusion (intrusion detection system?), they took the system down and patched it within a day.
9
lucb1e 12 hours ago 0 replies      
TL;DR: patching is hard, so use defense in depth to minimize/mitigate the impact of individual components becoming insecure from time to time.
10
shawkinaw 10 hours ago 1 reply      
Maybe patching is hard. But the real question is why one vuln in their web framework allowed unfettered access to basically all of their data. There are much deeper problems here than slow patching.
16
Facebook to Open New Artificial Intelligence Lab in Montreal bloomberg.com
47 points by kimsk112  11 hours ago   11 comments top 4
1
alexasmyths 3 hours ago 2 replies      
Go Montreal!

But ...

It's just another way to transfer the output of the exceptional local talent to the shareholders of big American companies.

I know a few in the AI-crew in Montreal and some local government types.

What they fail to grasp is that we need 'companies' that sell 'products' and 'services' and have 'revenue' - not just a bunch of shiny R&D.

A few AI jobs will actually have zero impact on the local economy.

Now - if some cushy, ridiculous 'social network' were based here, it would employ thousands, and have thousands of others supporting them in other ways, they'd be paying taxes here, and forming a hub.

Unfortunately, AI may not be the best hub for an industry - as plain AI doesn't generally make for good products. It's almost inherently a 'component' type technology. It's going to go into 'everything' but the best positioned to take advantage of it are those who already have leverage, i.e. Valley companies, finance etc..

Crossing my fingers for Mtl though.

2
mrmondo 1 hour ago 0 replies      
I don't want to make a negatively geared comment just for the sake of it, however (and there's often a however after such words): Facebook and other platforms where the product is you, the person, deeply trouble me, especially when said company invests heavily in data sciences that will likely be used to market products to, or get more information from, people.
3
hellofunk 4 hours ago 2 replies      
It would seem the rumors are true: Montreal is becoming a global headquarters for the golden age of artificial intelligence.
4
chiefalchemist 1 hour ago 1 reply      
How much of this is FB wanting to go to Montreal? And how much of this is FB anticipating (AI) regulation in the USA?
17
The Realities of Being a FOSS Maintainer caddy.community
153 points by jwcrux  9 hours ago   73 comments top 18
1
aorth 1 hour ago 0 replies      
> (Is anyone offended that you can't remove nginx's Server self-promotional header without an extra module?)

Wait a second. The Caddy developers implemented a premium "thank you" feature that embeds their sponsors' company names in the HTTP headers of the binary distribution[0], and then defend this by complaining that you can't stop nginx from simply printing a header that says "server: nginx"? It's not even remotely the same thing!

I think this was a very poor use of judgement. I don't even see why it's relevant that they asked their sponsors "do you want us to put your names in the HTTP headers of all the poor suckers running the binary builds?" There's a handful of other places that would be way more appropriate for spamming sponsors' names, for example: banner on the website, in the log files, on the download page, in the README, etc.

[0] https://github.com/mholt/caddy/pull/1866

2
SwellJoe 8 hours ago 2 replies      
When Caddy made their announcement earlier this week, I shook my head with pity at what they were about to endure. I could have given them a list of all of the shitty (and merely unhelpful) things people were going to say to them, because I've heard them all, too. There's a deep streak of meanness, and, as the post states, entitlement, in the OSS user community (much less so in the developer community, which doesn't overlap as much as it once did, though sometimes developers do it, too).

And, the really awful thing is that if the project continues, it'll never stop. Every once in a while someone or some group will get a bee in their bonnet about the thing not being open enough, or the authors not following or not understanding the license (as though the copyright holder needs a license), charging too much for the one little thing they reserve for paying customers (no matter how small that amount is...we charge as little as $6/month, and we still get complaints about price).

And, there's always a huge misconception that because an OSS project has a lot of users it must be making a lot of money, especially among the people who won't tolerate any action that would actually make money for the project.

What I'm trying to say is that OSS is a hard-as-hell way to make a living (I've done it for my entire professional life, over two decades now), and there's gonna be a handful of users who will make it unpleasant. I love it when I can build some one-off thing and just throw it over the wall onto Github and never think about it again...where I can take a "use it or don't, but don't ask for my time" attitude. Making a business out of an OSS project makes it harder to select good users (who become good customers), though you have to in order to survive.

3
rmrfrmrf 6 hours ago 1 reply      
I think the main problem is that the original announcement read more like a ransom note than a press release.

IMO the author doesn't realize how much price factors into tech stack decisions, or how the majority of his users are developers who now have the new responsibility of creating business cases for a piece of software that need to go through approval (or scrambling to find a replacement), or how many projects now have to be delayed to switch out one framework on the back end (I can just see the looks on manager/PM faces everywhere when they get the news from one of their devs).

The tone is another big issue. Flippant phrases like "what's new for personal edition users? Not much" aren't going to get any laughs after talking about mandatory payment. Other times, the disdain for users seeps from the wording (like the passive-aggressive "reminder" that internal apps constitute commercial use).

I'd really encourage the author to go back and re-read that announcement as a user who has taken a gamble on your software and is now invested in it. Yes, this might be a completely necessary move for the survival of Caddy. The necessity, though, doesn't mean you can skip the formalities. Users still need to be sold on the idea.

4
fusiongyro 7 hours ago 3 replies      
Of course you're going to hear supportive friendly things from your friends. It's much harder to tell someone a hard truth if you see them as your friend and want the best for them.

Here's a blunt truth. If it's easy for me to ditch your product, your product has no value. 90% of the comments here on HN amounted to "gee, I guess I have to spend an extra 20 minutes configuring Nginx now." Solving 20 minutes of Nginx configuration is not a viable business. System configuration is handled by some Puppet library author. Who's going to spend $100 on your product so they don't have to spend $20 on a non-free SSL certificate?

You're friends with the RethinkDB developers? The moral of their story was even if you have a compelling product, even if you have an interesting niche, even if you give it away for free, even if you give away the source code, even if you have features nobody else has, if you are entering a saturated market you are facing a massive, time-consuming, expensive up-hill battle. Moving away from RethinkDB to Postgres is hard. There are things RethinkDB does that Postgres just doesn't do. But people did this anyway instead of paying, instead of contributing.

Moving away from Caddy is simple. Your product barely even exists. That's why you're having trouble making money. Solve a real problem and you'll make real money.

5
Sir_Cmpwn 2 hours ago 3 replies      
Oh, boo-hoo. I maintain a bunch of FOSS projects too and I endure entitlement too, and yet I was still there calling you out for this. What Caddy has done is deliberately mislead and disrespect both new and established users. Other groups have added commercial licensing options to their software without facing (as much) vitriol because they didn't mislead or disrespect people while doing so. Don't think Caddy is in the same league as them. You got hate because you did it wrong, not because you did it at all. Your post is mining for sympathy because you got hate for being a jerk, and it demonstrates that you've learned little from the event.

You may not owe your users anything, but guess what: they don't owe you anything either. That's how open source works.

6
chriscappuccio 11 minutes ago 0 replies      
Every popular project gets this kind of crap. The best response is to push back on these entitled fools.

I don't see how this is a better business model for Caddy than donations (which didn't work.) This is still basically a donation model, since they are not creating a separate closed-source edition. This is a way to automatically solicit for donations, making them look more like licenses.

Why is a new web server so important that there were going to be commercial users paying for it over established alternatives? When I saw that it was written in Go and replete with the latest buzzwords, I realized that this was written for ninjas. The intersection of ninjas and commercial users may be too small for a business. (I hope I'm wrong, for the author's sake.)

7
strken 1 hour ago 1 reply      
As someone who was going to switch a few personal projects to Caddy this weekend, and who read through both the initial announcement and this, I am now massively confused by the whole thing and have no idea what users are actually allowed to do.

Is "sudo apt-get install caddy" allowed? Does header advertising mean "Server: caddy" or "X-Advertisement: go to www.malicious.com for camgirls"? Is this indicative of a broader move to stop supporting the open source version? If I had previously installed Caddy through a package manager that grabbed the official binaries, and I updated my system, would I now be breaching an EULA that I never saw? And so on.

Obviously I am not entitled to get anything for free and don't use the software anyway, and I can probably find answers after 10 minutes on the Caddy website, but this seems like a communications issue that might have made people angry - particularly if they thought they would have to immediately migrate back to nginx with zero notice.

8
zbentley 8 hours ago 0 replies      
This was pleasantly nuanced and comprehensive. The gamut of reactions to license changes is always a fascinating roller coaster.

The "most maintainers have very little to lose" point is one that isn't emphasized often enough. For every famous, high-profile FOSS developer who gives big swaths of their life to their projects, there are dozens or hundreds of people who are: a) contributing stuff they build for their job that they convinced the company to open source (hopefully out of good will, and not out of "maybe community PRs can fix our garbage fire") b) contributing code they wrote while learning something new for personal or professional enrichment, or c) just doing it as a fun activity.

The give-your-life-to-a-project folks are important, and prolific. Hats off to them. But so are all those other contributors. They might write less code than the lifers, but their code isn't any worse because they spend fewer hours per week developing it; it just takes longer to get written. And there are a lot more "casual" FOSS developers than lifers. Many FOSS projects started out in that "casual" realm before they became household names today. Sure, a lot of people think some of those casual projects aren't great, but that's not unique to small FOSS projects (cf. systemd).

All this is to say: be careful when communicating, especially negatively, with FOSS maintainers. If ethics and basic decency isn't reason enough to treat your fellow humans with compassion, try self-interest: a lot of these people are so un-invested in the projects you depend on for your {fun|living|freedom from the soulless void of the blinking cursor in an empty terminal} that your "How could you be so stupid?" or "Guess this project has gone to hell" comment on their GitHub PR might be enough to make them abandon it entirely.

Edit[s]: I accidentally some words and punctuation marks.

9
edwhitesell 5 minutes ago 0 replies      
I find it amusing that when I try to load the site I get a 429 "Too many requests" nginx error.
10
ploggingdev 6 hours ago 0 replies      
I think a significant portion of the hate could have been avoided if mholt had made it clear in "The High-Order Bits" section of the original blog post that Caddy is still free to use for commercial purposes if you build from source, and had also explained that it's possible to remove the response headers. That lack of clarity can be explained by the need to push businesses to pay for a license. This move is often perceived as a dark pattern among hackers, who hate it with a passion, as evident in the previous post's HN comments.

Having said that, what was particularly disturbing was the amount of hate towards the Caddy team. It's best to explain why you think a certain change was "bad"/not in your best interest and maybe offer suggestions on how to balance the need to build a business without alienating the community that supported the project in the early days. Outright vitriol is not a productive way to further your cause.

I guess building a business out of FOSS is as hard as ever.

11
JepZ 3 hours ago 0 replies      
Well, I barely knew Caddy (had to google it again), but after reading the "Ugly" section I am sure I will not become a caddy user anytime soon.

I mean we all know how brutal (emotionally) online communities can be and I respect anybody who stands against it. Nevertheless, I do not think that 'Entitlement' and 'Emotional Manipulation' are the real problems here, because those are just honest expressions of what other people think. Those comments are some kind of feedback which the maintainer can accept or ignore.

On the other hand, there are a lot of discussions where we have personal assaults, like 'the maintainer is a prick' or 'what a dumb move'. Those are totally unacceptable, and the people in the discussion should show what they think about such comments.

And in the end, HN has a very heterogeneous group of commenters, but Paul Graham gave us a guide on how to write good comments: http://www.paulgraham.com/disagree.html

12
quickben 6 hours ago 1 reply      
It depends how you look at it:

1. I would contribute to GCC because it's still "free".

2. I would have contributed to Caddy, but now they are half commercial, so I'm disinclined to provide my "free" to their commercial success.

Most decent people that I know would calmly do the same.

Most of those leeching off Caddy will send vitriol to the author. But they are not his community anymore; now they are his customers.

13
throwaway0071 1 hour ago 0 replies      
Caddy's mholt bears some similarities to Docker's shykes in that both lack business and open source acumen.
14
sah2ed 5 hours ago 0 replies      
> Comments started rolling in, but I had class until almost noon. With me in class and Cory at his day job, it was impossible to coordinate any responses until much later that day.

The quote above clearly shows a huge oversight -- there were no plans in place to block out time to do PR after the announcement. Better to schedule the announcement to coincide with a period when the co-founders would be able to clarify the public's perception of the changes. Even for OSS projects, marketing is important. But that's not even the underlying issue.

There is a subtle but very important observation that the OP's article didn't touch on, and I think it will help illuminate why the seemingly small changes to Caddy led to the outburst of entitlement and vitriol directed at Matt and Cory.

Humans are naturally loss averse[0], which is why there is an enormous difference between marketing copy that says "a $5 discount" versus saying "avoid a $5 surcharge". The original announcement[2] was framed[1] like a loss -- existing users should prepare to deal with previously non-existent advertising of Caddy's sponsors -- causing the human instinct of loss aversion to kick in in full force.

This will always happen whenever you switch your user/customer interaction from social to market norms [3], which is essentially what it means to monetize an OSS project. Better to keep the free product unchanged and create a separate product targeted at commercial users [4], to avoid alienating your free users or incurring their wrath.

Engineers are generally skeptical of marketing but this is one of those situations when good marketing would have helped to put out fires.

[0] https://en.wikipedia.org/wiki/Loss_aversion

[1] https://en.wikipedia.org/wiki/Framing_effect_(psychology)

[2] https://caddyserver.com/blog/accouncing-caddy-commercial-lic...

[3] https://www.technologyreview.com/s/419923/social-vs-market-n...

[4] https://en.wikipedia.org/wiki/Product_differentiation

15
jim-greer 7 hours ago 0 replies      
This is a great piece.

I'll quibble with this bit though:

> ...toxicity festers in open source because it's all too common for forks to ground their motivation in enmity towards other projects.

This is wishful thinking, for the most part. Communities don't break off because things are going well. The American Revolution wouldn't have forked if King George had been sympathetic to their needs.

16
z3t4 5 hours ago 0 replies      
Software licensing is damn hard. When it comes to certain tools, people won't use them if they are not open source or don't get active maintenance. But they are also willing to pay for them.
17
yjftsjthsd-h 8 hours ago 2 replies      
In some ways, I agree with him; he doesn't owe anybody anything, it's fine to make money on his software, etc. The whole thing still rubs me the wrong way, and I think some of the presented arguments are, to say the least, biased.

>> But then I have to build from source to get what I want. > Yes that's the point. Welcome to open source.

> I do find it ironic that the open source community is so irate about having to compile software from source to customize it the way they want.

I trust the author is using LFS or Gentoo, and is personally building his Go toolchain and not using any Google-provided binaries?

>> So I have to pay to remove ads from my web server.

> This was one of the biggest misconceptions.

Half-credit, you're both wrong. Users could pay or build it themselves to avoid ads. So to a hobbyist who was using Caddy specifically because of the tiny learning curve, this is not a misconception. Thankfully this is a mostly moot point since they took the ads back out.

EDIT: Fixed formatting. One day, I will post and not have to fix newlines...

18
RomanPushkin 8 hours ago 1 reply      
FYI, there is no FOSS -- that expression is a misunderstanding. There is the free (libre) software movement, and there is the open source non-movement: two different viewpoints based on different values.

See https://gnu.org/philosophy/open-source-misses-the-point.html for more explanation of the difference between free software and open source. See also https://thebaffler.com/salvos/the-meme-hustler for Evgeny Morozov's article on the same point.

Regarding the term "FOSS", see https://gnu.org/philosophy/floss-and-foss.html

18
How do you find the positive integer solutions to x/(y+z) + y/(z+x) + z/(x+y) = 4? quora.com
5 points by d0mine  57 minutes ago   1 comment top
1
yorwba 9 minutes ago 0 replies      
Previous discussion (with non-mangled title): https://news.ycombinator.com/item?id=14943528
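
For anyone who doesn't click through: the gist of the method (from Bremner and Macleod's paper "An unusual cubic representation problem", which that thread discusses, if I remember right) is that clearing denominators turns the equation into the homogeneous cubic

 x(z+x)(x+y) + y(y+z)(x+y) + z(y+z)(z+x) = 4(y+z)(z+x)(x+y)

which defines an elliptic curve; for the value 4 it can be put in the Weierstrass form y^2 = x^3 + 109x^2 + 224x (going from memory here), and the positive integer solutions correspond to certain rational points on that curve. The smallest positive solution reportedly runs to about 80 digits, which is why brute-force search is hopeless.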
19
Google Has Spent Over $1.1B on Self-Driving Tech ieee.org
103 points by mcspecter  15 hours ago   75 comments top 21
1
tuna-piano 14 hours ago 6 replies      
I think by most accounts -

1) Google / Waymo is by far the furthest ahead in self-driving tech.

2) Self-driving is possible in at least certain circumstances in the next decade or two (already is to an extent today).

3) Transportation / automobiles is an absolutely massive industry. Even companies like Dana, Inc (maker of axles) have $4bil valuations. It's easy to see how a self-driving tech maker would be worth far more than an axle maker.

4) Software tends to be a winner-take-most industry.

Given 1-4, $1.1B doesn't seem like much. Even if by chance it doesn't turn out to be a positive ROI, it seems crazy to think the $1.1B was a bad bet.

And given current prices for self-driving startups (example, Gm's acquisition of Cruise for >$1B), the current market value for Waymo would likely be >$10B.

2
siscia 15 minutes ago 0 replies      
But am I the only one thinking that maybe we are trying to solve the wrong kind of problem?

If we had spent the same amount of money on creating a grid of mini-trains, I believe most cities would already have some working prototype...

And a mini-train is a control problem that we could have solved yesterday...

(By mini-train I mean something roughly as big as a car but constrained to move along some predefined path; that would mean they could move way faster than cars and be more space-efficient...)

3
Fricken 15 hours ago 1 reply      
I wonder if this includes the 'fuck you money' that engineers on Project Chauffeur received for passing certain milestones. Earlier in the depositions it was disclosed that Levandowski was paid a $120 million bonus, though it wasn't revealed how much was given to others on the project.

https://www.bloomberg.com/news/articles/2017-02-13/one-reaso...

4
cmarschner 4 hours ago 1 reply      
The top German car suppliers and car makers have accrued 10x the patents on autonomous driving that Google has [0]. Bosch alone has 3x the number of patents. Yet hardly anybody talks about Bosch and Continental and Mercedes/Audi/BMW etc.

[0] http://www.businessinsider.in/Whos-in-the-lead-in-developing...

5
lpolovets 8 hours ago 0 replies      
To put this in context, $1.1b is about 3 weeks of operating profit for Google. Considering the upside for Google if their experiment pays off, this seems like a pretty smart bet.
6
siddarthd2919 14 hours ago 3 replies      
It is not really that much! $1.1B in 6 years on a moonshot project.
7
coldtea 14 hours ago 1 reply      
I don't know how it is in the US, but if it's anything like in some European countries, a question is:

How much of that $1.1B is recouped as "investments in research" etc., and thus contributes to tax deductions (and in some cases even subsidies)?

8
davesque 12 hours ago 1 reply      
Can anyone put this figure in context? I'm inclined to imagine that $1.1B is chicken feed for Google.
9
Animats 8 hours ago 0 replies      
Probably more since the end of 2015, since they've been increasing the number of vehicles.
10
adventured 14 hours ago 2 replies      
$94 billion in cash burning a hole in their pocket. $20x billion in new cash pouring in annually, expanding by 10%-20% per year.

In another two years they'll be net cash richer than Apple is (as unlike Apple they pay no dividend and have almost entirely avoided taking on debt).

There aren't very many large companies they can buy with that money, due to perpetually increasing anti-trust concerns. I suppose they could maul ~$25 billion buying Snap and probably get away with it due to Facebook (but come on).

So what to do? Burn $1.1 billion on self-driving tech. Will it pan out? Doesn't really matter. Their search monopoly isn't going anywhere near-term (probably) and they'll have $200 billion in net cash in another 4 or 5 years. They could vaporize $11.1 billion on self-driving tech in the next couple of years and it would not matter, either to shareholders (oh, some would pretend to be upset) or to their operations.

11
slewis 12 hours ago 0 replies      
They've likely spent way more than that, since the article's number is only through the end of 2015:

"Between Project Chauffeurs inception in 2009 and the end of 2015, Google spent $1.1 billion on developing its self-driving software and hardware"

12
mempko 13 hours ago 0 replies      
Only $1B? Kind of highlights that they are not that serious about the technology yet.
13
amaks 15 hours ago 0 replies      
And $120 million of that amount got paid to Anthony Levandowski?
14
bukgoogle 2 hours ago 0 replies      
You really trust your family to be in a car that doesn't have a steering wheel and is controlled by "don't be evil" Google???

Seriously people, wakeup!

15
simonsarris 12 hours ago 4 replies      
Serious question: Why not acquire Tesla and position yourself leaps and bounds ahead of the other tech giants?
16
asdfologist 11 hours ago 0 replies      
And over 10% of it went to Anthony Levandowski.
17
anigbrowl 11 hours ago 0 replies      
It's a lot, and yet it somehow seems cheap at the price. DARPA seems to invest on a similar scale.
18
0xbear 12 hours ago 0 replies      
Meh. Less than Hillary's presidential campaign.
19
known 8 hours ago 0 replies      
And Uber won over Google's self-driving tech.
20
dingo_bat 7 hours ago 0 replies      
With literally nothing to show for it except litigation with Uber.
21
userbinator 12 hours ago 1 reply      
Google already has a huge amount of power over where people go on the Internet. It seems having power, however indirect, of where people go in the real world is their next goal, and it is quite deeply unsettling to consider. Imagine self-driving cars that will be cheaper but take you to "sponsored places" before going to your destination, refuse certain destinations "for your safety", and play ads continuously while you're helplessly transported around.

Given that amount of power they will have if they succeed, spending $1B on the (very real) possibility of getting to that situation is definitely expected. And very very scary for everyone else.

20
Firefox Multi-Account Containers blog.mozilla.org
770 points by nachtigall  1 day ago   214 comments top 51
1
ff_ 22 hours ago 9 replies      
I LOVE this feature, but it has only one problem: when I'm in a container and I press Ctrl+T (new tab), the new tab opens in the default container. This doesn't make sense, I want it to stay in the same container.

This was also discussed in the issue tracker, in a now-closed issue, in which the intuitive behaviour (staying in the same container) was proposed, but the discussion got sidetracked and in the end something totally different was implemented.

So if anyone from Firefox is listening here: please PLEASE consider implementing Ctrl+T in the same container :)

2
nicoburns 1 day ago 7 replies      
This is incredibly useful. It's basically like Chrome's 'profiles', except per-tab rather than per-window. So I can now have my personal gmail, my work gmail, and the 3rd gmail account for a client set next to each other, and colour coded.

This, along with the speed improvements (both the UI and content processes) in Firefox 55, has made it my default browser for the first time since Chrome was released.

3
notheguyouthink 23 hours ago 4 replies      
Cool!

As an aside, I've been migrating away from Chrome for a while - and I posted here a while back dismayed by how terrible Firefox was, how slow it was, etc. Many people suggested I switch to Nightly.

Nightly is .. a night and day experience. I've been fully switched from Chrome now, thanks to Firefox. Note that on OSX I've had no complaints with Safari as my Chrome replacement, so I've stuck with them - but on windows it's all Firefox.

Keep up the great work guys, the new stuff is amazing. Hope you can push it to stable branch soon for people. :)

4
wslh 23 hours ago 2 replies      
For historical reasons I cannot avoid mentioning that I achieved this with my Cookiepie extension 11 years ago (a billion Internet years ago indeed): https://youtu.be/2Pfg-kJ4nAw Cookiepie was written only in JavaScript and was very hackish because Firefox APIs didn't have a way to correlate network requests with the tab in the UI, so I traversed network and UI objects recursively to find unknown relationships between them. It was very difficult to support because even minor Firefox releases broke it.

I even posted my Cookiepie extension for the first Firefox extension contest [1] and there was no prize or mention for it.

[1] https://blog.mozilla.org/press/2006/03/mozilla-announces-win...

5
raimue 23 hours ago 1 reply      
I am a long-time user of the deprecated Multifox extension, and I switched to Firefox containers when they were introduced to the stable releases about a year ago. This feature is actually built into Firefox; you only need to change some config settings to enable the UI (probably that is also all the linked extension does?).

As Multifox was one of the old XUL/XPCOM extensions, I am glad that this functionality was integrated natively before Firefox 57 disables all extensions that are not WebExtensions.

It is a great way to login to multiple accounts on various sites such as Twitter, without going through the hassle of a full logout/login cycle. You can use the accounts side-by-side in different tabs, which will be color coded to indicate which container they belong to.

More details can be found on the Mozilla wiki:https://wiki.mozilla.org/Security/Contextual_Identity_Projec...
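
For anyone who'd rather flip the builtin switches directly instead of installing the extension, these are the about:config prefs as far as I remember them (the extension mostly adds a nicer management UI on top):

 privacy.userContext.enabled = true
 privacy.userContext.ui.enabled = true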

6
ihateneckbeards 23 hours ago 5 replies      
That's great. I hate it when YouTube recommends me all kinds of videos about turtles just because I stumbled over a video of a turtle 6 months ago.
7
timthelion 2 hours ago 0 replies      
I love this! I tried to do this by running multiple instances of firefox in separate Docker containers using software I wrote for the task (see subuser.org), and while it works, for more than a few different accounts it gets slow to switch between them because my system won't keep all of the instances of firefox in memory.
8
mey 22 hours ago 2 replies      
I use Chrome profiles heavily, so I am very happy Firefox is exploring this feature. When doing consulting, I like to keep different client activities isolated to their own profile, so I have less things to juggle if they use the same cloud service (AWS, G Suite, Jira, etc).

One limitation I currently see to that workflow (which works better for me in Chrome) is that this appears to all reside under a single Firefox Account, which essentially creates a master set of data to sync.

I would like to be able to set up Containers pegged to different Firefox Accounts (or to none at all).
Jeaye 17 hours ago 1 reply      
I was in the test pilot for this and I had one singular gripe which I don't think has been addressed or brought up anywhere else I've seen: I want to be able to move a tab from one container to another.

It's so easy to open a tab in the default container, or the wrong container, and being able to move that tab, along with all the data it has spawned (like cookies) would make this a killer feature for me.

The only other thing, which admittedly makes my one singular gripe less singular, is that I didn't see any separation in the history, as far as what was in a given container. In an ideal world, each container would have its own "Show all history" data.

10
SubiculumCode 17 hours ago 1 reply      
I've been using containers in test pilot for about a month and I love it. Google is separated from my news reading, is separated from my banking, is separated from my shopping, is separated from my leisure activity, is separated from my work activity. Once you set it up to always open a site within a designated container it is all smooth.

However, I wonder: what is the technical reason for not making it default to one container per site? Sure, that would mean hundreds of containers... but does that pose performance problems?

11
kodablah 22 hours ago 2 replies      
I am building my own Chromium-based browser with a similar concept called "bubbles" [0][1]. Not doing a show-hn on it until next week because I need to build another release but feel free to try it out (I recommend building over using the one in the releases area as a lot of bugs have been fixed in master).

Oh and for the commenter wanting Ctrl+T in the same container, a Ctrl+Shift+T in Doogie does open a child page in the same bubble.

0 - https://github.com/cretz/doogie

1 - https://cretz.github.io/doogie/guide/bubble

12
emerongi 22 hours ago 1 reply      
This has been extremely useful for the past month or so that I've been using it. I separate work accounts and personal accounts and that has tremendously simplified using the browser. For Youtube, I can use a different Google account without logging out of my main one. I call it my "Entertainment" container - maybe it will also make it harder for agencies to connect my leisure activities to other activities.

I even have a "Testing" container when I'm testing a webapp and need to log in with 2 different users in the same window. Very convenient.

13
piyush_soni 9 hours ago 1 reply      
Not sure. I've been using separate "Work" and "Personal" profiles at my workplace, and can beautifully have two Firefox sync accounts syncing their own stuff. This limitation of the new Container extension prevents me from using it as I'd never want my personal and work set of extensions, bookmarks, and history to mingle with each other. Big oversight.
14
gangstead 21 hours ago 2 replies      
I've been wanting something like this for Android / ios.

I've had the problem that many restaurant rewards program have gone from "10 punches on this card and your next sandwich is free" to "type in your phone number / scan this card" on each visit and have now become "install our app" to get that free sandwich. That's more than I'm willing to give up for a cheap meal once every few months.

15
josefresco 22 hours ago 0 replies      
I implement "containers" simply by using different browsers (one for each screen). Chrome runs my (Google) email, calendar, drive. And then I use Firefox for my client work, where I log in/out of various client identities. I have Firefox set to "nuke" all session data on close - an absolute must-have feature for testing caching issues and making sure I don't end up with "hidden" active sessions around the web.
16
newscracker 22 hours ago 0 replies      
I've used this from the days of it [1] being in Test Pilot (an add-on for experimental features) [2] and really loved the idea. Usually I'd use a couple of different browsers or shuttle between normal and private browsing/incognito modes for using multiple logins on services (from a privacy standpoint, I don't like linking accounts together on any service, like for example how Google allows users to do).

I did provide feedback to the developers on the following:

1. Opening new tabs should have better intelligence about which container a user wants to go with.

2. Improving the look of the tab bar for better tab visibility and clarity on which tab was the current one.

3. Detailed and clear documentation on how containers work across normal windows and private windows, because I certainly wouldn't want to use something believing that it's providing me isolation when it does not in certain scenarios. To my limited knowledge, the behavior of different browsers in keeping cookies/storage isolated in private/inprivate/incognito mode varies when it comes to multiple windows, multiple tabs and closing windows/tabs. That is already unclear enough (to me) that I don't open more than one private/inprivate/incognito window at the same time.

I would love for this to get into Firefox main instead of being an extension!

[1]: https://testpilot.firefox.com/experiments/containers

[2]: https://testpilot.firefox.com/

17
provemewrong 20 hours ago 0 replies      
Another fan of containers. I switched to Chromium/Safari a long time ago, but installed Nightly 57 the other day out of curiosity, and containers is definitely the best feature in it. Only thing I would love even more would be a private/incognito container (or basically private tabs alongside regular tabs without the need for opening a private new window).
18
dexzod 19 hours ago 2 replies      
The article says "online trackers can't easily connect the browsing", which seems to imply that they can still connect it. Why can't they be completely prevented from tracking other browsing? The second question I have is: how is this different from Firefox profiles?
19
rb666 11 hours ago 1 reply      
Great feature! I switched to FF Nightly some months ago, and I can confirm the performance is great. Sadly I had to switch back to Chrome, the quality of extensions in Chrome is just much higher.

Now, I only recently learned that in theory you can use Chrome extensions in Firefox. Does this actually work well, or just so-so?

20
nothrabannosir 23 hours ago 8 replies      
I've been using this for over a month now, and while I'm convinced it's the right idea, the implementation leaves much to be desired. Currently, it costs more effort than it's worth.

[EDIT: comments show this does exist! great] Missing: easy way to open a new tab in a specific profile. Ctrl-T always opens in the Default profile, not the one you're on. So I have to go File menu -> New tab -> select profile. And that menu changes items around slightly, so no muscle memory. I end up going to a tab already open, middle clicking a random link, Ctrl-L, and using that as a fresh tab. I see on their little drawings they show some cool drop down under the + button at the right of the tab row, but I can't find any such functionality.

[EDIT: Comments show exists. Good enough!]Missing: a way to fix certain hosts to certain profiles. E.g. {XXX.myclient.com -> always open in "Client X" tab}. E.g. with links from GitHub (which is client independent) into custom CIs (jenkins etc). You forget, "why isn't this logged in? oh, profiles", go back, right click the link, open in new container -> select container. Ugh.

Missing: a way to disallow any non-whitelisted hosts from a tab. E.g. having a gmail tab is useless, because every link you click will open in that profile (and you won't notice because hey, it works) and now your gmail credentials and cookies are available there. Again defeats the purpose. Especially for a "Banking" tab, for example.

Missing: clear warning that this doesn't do anything meaningful against tracking. It's a complete waste of time to separate your Facebook into a separate profile if you don't want to be tracked across other domains. Fingerprinting goes well beyond cookies. They don't need your account cookie to link your visits.

Missing: segmentation of plugins!! Different NoScript or block settings per profile? yes please! Or even just native Firefox settings (3rd party cookies, clearing policy, etc) per website per profile would be lovely.

All in all: I'm stubborn so I'll keep using it, but I'll be honest: there's quite a low ROI on them, as they are. Good start, hope they improve.

EDIT: Another missing: clear cookies only from a certain profile. E.g. discover I've accidentally been browsing youtube in work profile (or whatever), I want to delete all youtube cookies _but only from that profile_. Can't do it. I encounter this problem often with GMail, where I want to clear a friend's login but not log out all my sessions from different containers.

(PS: Sorry for using "profile" and "container" interchangeably---it was a bit stream of consciousness. I mean "container" for both words).

21
feanaro 20 hours ago 3 replies      
I wonder if there is any reason each tab isn't spawned in its own container. It seems the natural thing to do once you have implemented this, since it maximizes privacy. Unless the resource usage is the limiting factor, I don't see a downside. Am I missing something?
22
drvortex 15 hours ago 1 reply      
So this basically replicates the functionality provided by https://sessionbox.io/

Is multi-process here already?

23
ComodoHacker 22 hours ago 1 reply      
I doubt this will help much with privacy. People's laziness, plus the cognitive effort needed to track which container you are in, plus various tricks from advertisers and publishers, will keep the vast majority of users perfectly trackable.

Chrome's approach at least helps to keep multiple profiles visually separate.

24
tjoff 18 hours ago 0 replies      
I really like this, but why for everything that is holy does it not work in Private Windows?

I use Private mode a lot and it doesn't really make sense to group everything together just because it is "Private".

25
maxerickson 23 hours ago 0 replies      
Can any browser historian explain why the original models of cookie sharing weren't more like this?

I figure it comes down to some combination of lack of consideration and performance concerns, but that is just speculation.

I suppose restricted cookie sharing is also a lot more complicated for the user.

26
wutwutwutwut 23 hours ago 1 reply      
I just want to be able to open a new tab in a brand new container. Similar to File->New Session in Internet Explorer.

If I want to test my web app with say 4 different identities then figuring out which container is "free" becomes cumbersome.

27
jqs79 21 hours ago 2 replies      
How does this compare with using multiple profiles and the -no-remote flag? Does this manage only cookies, or does it also separate local storage (HTML5 session/local/global/web sql database), webcache, window.name caching (if the same tab can use multiple profiles), web history, flash cookies, for those who still have flash installed, etc.

People might get a false sense of security if all of these methods of saving data in the browser are not also separated along with cookies.

28
DoubleMalt 21 hours ago 1 reply      
That might make me bring part of my browsing back to Firefox. The identities functionality was what made me use Chrome almost exclusively for the last two years.
29
mderazon 17 hours ago 1 reply      
I like this but I like chrome multi profile model better.

I like that extensions are separate for each profile.

For example, I have two separate LastPass accounts. One for work and a personal one. There is no way for me to keep them separate like this.

30
matt_f 9 hours ago 0 replies      
Can anyone from Mozilla here explain what this is written in, or how it works?

"Containers" makes me immediately think of Docker-like containerized applications, which I suspect is not actually the case here.

31
gaius_baltar 19 hours ago 0 replies      
This is great! I have been emulating this feature for years with multiple profiles, but let's get things right: it takes some work and it is hard to teach non-techie folks how to do the same.

Time to test the thing.

32
thinbeige 15 hours ago 2 replies      
This is my most wanted feature for a browser. Does anyone know if every container has not just a different cookie set but also a different canvas fingerprint?
33
stevenhubertron 20 hours ago 2 replies      
I would swap to Firefox in a second if I could live with its font rendering. It's just so different from what I am used to in Chrome/Safari/Opera, and for some reason it's really hard for me to read.
34
the_common_man 13 hours ago 0 replies      
Press and hold on the '+' button to show the container menu. Took me a while to figure this out.
35
amelius 22 hours ago 0 replies      
Meanwhile, the websites I visit are tracking me across containers :(
36
pbreit 19 hours ago 0 replies      
This is the one feature that keeps me from using Safari.
37
witten 19 hours ago 1 reply      
Dumb question: Does the implementation have anything to do with containers? E.g., Docker? Or is this just overloading an existing industry term?
38
35673567 21 hours ago 0 replies      
Love that they have decided to add this feature officially. I was using sandboxed tabs -> Priv8 for years so that I don't have to be logged in to Facebook and old emails globally.

The one thing I miss over the old plugins is the ability to set home pages per profile, which I know doesn't really fit in with the new-tab ethos of default Firefox, but I would love a plugin to add the functionality back.

39
wakkaflokka 1 day ago 1 reply      
I'm on the nightly and it says the extension is not compatible with my version of Firefox. Is this only for <56?
40
Tajnymag 20 hours ago 1 reply      
Do these separate history as well?
41
lucaspottersky 21 hours ago 0 replies      
this is _AWESOME_!!!

maybe the best feature since HTML5 went mainstream!

i'm so tired of using Incognito windows for that!

42
1024core 22 hours ago 0 replies      
> and online trackers cant easily connect the browsing.

For what definition of "easily"?

43
MadWombat 16 hours ago 1 reply      
But is it going to work once FF drops add-on support?
44
ams6110 21 hours ago 2 replies      
Firefox has supported multiple profiles for a long time. How is this better?
45
teekert 23 hours ago 1 reply      
Huh? Did I just add this and get a "Firefox Screenshots" icon with it?
46
lasermike026 20 hours ago 0 replies      
Awesome! I wish the UI was a little tighter.
47
anovikov 23 hours ago 0 replies      
Will be cool for many Upwork account brokers
48
esaym 20 hours ago 1 reply      
Is the "initial" version really "4.0.1"?
49
nasredin 23 hours ago 1 reply      
50
akerro 23 hours ago 2 replies      
How did they know I have multiple personalities? Are they watching me?
51
doe88 22 hours ago 0 replies      
Just a word of caution: anecdotally, I installed the Containers extension/feature 2 weeks ago when this was discussed on HN. I opened some tabs in different contexts, copied important links I wanted to keep, then decided to hide them. Yesterday I finally wanted to read one of those links, so I went to look in the menu... poof, all my links gone. Needless to say, I was not happy. So not only have I uninstalled this feature, I've dropped Test Pilot altogether. I've decided from now on to keep things simple, because it seems that's the only thing that really works. Maybe I'm rambling a bit, but the sad truth is I don't have much trust in Firefox anymore; I use it because it is, to me, the least worst browser, not because I really enjoy using it.
21
Apple Park employees revolt over having to work in open-plan offices dezeen.com
51 points by merraksh  1 hour ago   20 comments top 10
1
noir_lord 42 minutes ago 1 reply      
I'm moving my office at work from a nice office next to the production office (noisy, people banging in and out all day) to a much bigger windowless space on the other site across the road, just because of the noise.

The boss keeps asking if I'm OK with no windows and I keep saying "Have you ever seen me with the blinds open?". It's amazing how even good managers don't get that, for some of us, a cave with fast internet is the best environment.

2
tomohawk 1 hour ago 0 replies      
Such a failure to learn. The US school system went through this with "open classrooms". Schools were built to support the idea of no walls between classrooms. For years, teachers trying to maintain sanity erected temporary dividers. These days, most of those schools have been rehabbed to include actual walls, or they've been demo'd. Such a waste.
3
heisenbit 47 minutes ago 1 reply      
Steve Jobs is said to have poured a lot of thought into how the architecture of the building supports work. Steve also had some architectural experience from Pixar. On the macro level, the alignment of the architecture with the organization, as it was explained, always made sense to me.

On the micro level, changing from offices to an open floor is a huge change. How was that supposed to preserve and foster the engineering culture? Apple is the company where walls and secrets are not an accident but carefully designed. I could imagine part of the organization going to an open floor, but the whole building?

4
Spooky23 46 minutes ago 0 replies      
I have a really bad feeling about Apple and this building.

Companies tend to have bad things happen when they build or move HQ anyway, and the fetishization of Apple Park is a level beyond.

Even their product marketing is distracted by talking about how awesome the building is. How much time has been wasted in the company on the topic?

5
StronglyTyped 11 minutes ago 0 replies      
Where are they gonna go, though? The whole Valley has succumbed to this fad.
6
bjourne 23 minutes ago 1 reply      
> "Judging from the private feedback I've gotten from some Apple employees, I'm 100 per cent certain there's going to be some degree of attrition based on the open floor plans, where good employees are going to choose to leave because they don't want to work there," Gruber said.

But where are they going to go? Is there a single company left that hasn't adopted the open-floor-plan ideology? If so, are they hiring? Because I'd like to send in my resume. :/

7
tempodox 41 minutes ago 0 replies      
I guess, once you're big enough, you can afford to have people quit and let the rest work inefficiently, just for the vanity of a design chief.
8
proyb 17 minutes ago 0 replies      
I think it would be easy to reserve the top floor as a quiet working environment and keep the noisy environment on the lower levels.

Singapore has an office that looks similar to the Apple Park spaceship. Directors, professors and councillors there work in open space; I don't see why not?

9
mirekrusin 23 minutes ago 1 reply      
In the EU all those glass doors would have to have a matte texture or stickers around eye level so you don't walk into them by accident. I'm surprised there's no such requirement in the US?
10
Shivetya 49 minutes ago 2 replies      
We had a web group that, to great fanfare, had an all-new work area created for them: open ceilings, glass-walled conference rooms, and an open plan in general. I am not sure anyone liked it, from the lack of perceived "my space" to the simple increase in noise. Plus, if the computer setups are not well planned out, it just looks wrong.

For me, it recalls one especially bad contract where we worked on school lunchroom tables: five of us in a row facing a wall, with one phone at the end. Amazing that, after spending our hourly rate, they thought they could save money without realizing the impact on productivity, if not turnover.

People deride cubicles now, but they are the last safe haven. Just like dogs and other animals, there is a lot to be said for the subconscious comfort you get from the walls of the cube and a single entrance.

22
HTTP Immutable Responses ietf.org
141 points by okket  17 hours ago   75 comments top 13
1
edmorley 13 hours ago 2 replies      
For more on the rationale behind this feature, see:

https://www.ietf.org/mail-archive/web/httpbisa/current/msg25...

https://bitsup.blogspot.co.uk/2016/05/cache-control-immutabl...

Rough summary:

> At Facebook, ... we've noticed that despite our nearly infinite expiration dates we see 10-20% of requests (depending on browser) for static resources being conditional revalidations. We believe this happens because UAs perform revalidation of requests if a user refreshes the page.

> A user who refreshes their Facebook page isn't looking for new versions of our JavaScript. Really they want updated content from our site. However, UAs refresh all subresources of a page when the user refreshes a web page. This is designed to serve cases such as a weather site that says <img src="" ...

> Without an additional header, web sites are unable to control the UA's behavior when the user uses the refresh button. UAs are rightfully hesitant about any solution that alters the long-standing semantics of the refresh button (for example, not refreshing subresources).
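
(For concreteness, the opt-in this draft standardizes is a single Cache-Control extension token. A response that should never be revalidated carries something like:

  Cache-Control: max-age=31536000, immutable

where max-age is the usual one-year example and immutable is the new token telling the browser to skip conditional revalidation, including on refresh.)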

2
DanWaterworth 12 hours ago 0 replies      
I think a better way to handle this would be to use subresource integrity [1]. Then, if the browser's cached version matches the hash, it can be sure that it doesn't need to do any requests.

[1] https://developer.mozilla.org/en-US/docs/Web/Security/
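
(A sketch of what that looks like in markup; the digest below is a placeholder, not a real hash:

  <script src="https://example.com/app.js"
          integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
          crossorigin="anonymous"></script>

The browser hashes the bytes it fetched, or cached, and refuses to execute the script if the digest doesn't match the integrity attribute.)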

3
manigandham 15 hours ago 2 replies      
It would be far better to have user agents handle caching headers correctly instead of creating another configuration option (which will likely suffer from the same implementation problems).

cache-control: private with either sliding or concrete expiration time already handles this.

4
meandmycode 15 hours ago 2 replies      
What does immutable really mean when you only rent a domain name? I think about this occasionally with domains and email addresses: they've become a trusted piece of information, but over time the ownership of that thing changes. It does make me wonder about fraud in the future, when sizeable companies die off and their domains free up.
5
lgierth 16 hours ago 0 replies      
Nice to see this has made it through the standards process -- the `immutable` keyword is tremendously useful for systems that store and provide actual immutable data, e.g. content-addressed distributed systems.
6
jlgaddis 11 hours ago 0 replies      
I have a feeling this will end up like HSTS. It sounds really great at first, then a bunch of folks will get burned by it (just wait until somebody accidentally sets it for /index.html or whatever), and finally the general recommendation will be to stop using it altogether ("more harm than good").

Forever is such a long time.

Besides, aren't there already ways to say "cache this resource for <acceptable timeframe>"?

7
carussell 15 hours ago 0 replies      
It would be nice to have strong resource pinning. I've been in contract situations where coordinating with in-house IT for a new server deployment would have been a massive, go-nowhere headache, while throwing a webapp together on my own and self-hosting would've been easy but a big problem wrt company policy on data export. Resource pinning solves this by bringing us into the realm of "auditable". Strong resource pinning would look something like this RFC, except the browser would refuse to accept new resource deployments without deliberate consent of the client. (Something that can be bypassed with a hard refresh, as in this RFC, is not strong enough.)

Other situations I imagine would benefit from this are web crypto and HIPAA compliance.

8
mjs 4 hours ago 1 reply      
Chrome is not going to get this:

https://bugs.chromium.org/p/chromium/issues/detail?id=611416...

Instead, when the user attempts a full-page reload, Chrome will revalidate the resource in the URL bar, but not subresources (they will come from the browser cache, if the cache-control header checks out):

https://blog.chromium.org/2017/01/reload-reloaded-faster-and...

9
glacials 13 hours ago 1 reply      
This is also known as key-based cache expiration, detailed by DHH here: https://signalvnoise.com/posts/3113-how-key-based-cache-expi...

You never worry about when to expire your cache entries if the key changes every time the item does. It's nice to finally see cache-busting coming out of the woods.
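
(The usual shape of this, with hypothetical build artifacts; the hash suffix is derived from each file's content, so the URL, and therefore the cache key, changes exactly when the file does:

  <link rel="stylesheet" href="/assets/site-9f8e2a51.css">
  <script src="/assets/app-3f9a1c7b.js"></script>

Combined with a far-future or immutable Cache-Control header, old versions simply stop being referenced instead of needing to be expired.)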

10
IncRnd 11 hours ago 1 reply      
Section 3. Security Considerations

"Clients SHOULD ignore the immutable extension from resources that are not part of an authenticated context such as HTTPS. Authenticated resources are less vulnerable to cache poisoning."

This must NOT read SHOULD. It must read MUST! Otherwise, your computer will be subject to an executable planting vulnerability.

I'm surprised they don't have a much larger list of security considerations. There are many other issues that can happen.

11
cpburns2009 16 hours ago 2 replies      
I like the idea of this. This would be very useful for versioned resources (e.g., images, JS, and CSS files) because it would eliminate unnecessary requests for them after they're cached.

The example of a 1 year cache time seems a little extreme, though. I think a month would be better.

12
Lxr 12 hours ago 1 reply      
What happens when wrongly configured servers, or frameworks with default settings, implement this too aggressively? Browser vendors want their product to work as expected when the user hits refresh, so they will be forced to either violate the standard or show stale content.
13
masterleep 15 hours ago 4 replies      
It's sad that there's still no good way to do deployment based expiration of assets without horrible hacks like sticking the asset checksum in the URL. I know that none of my assets will ever change unless a deployment occurs, and even then, most of them won't change. HTTP doesn't seem to support this use case well at all.
23
Court OKs child porn prosecution of minors distributing pictures of themselves washingtonpost.com
230 points by ikeboy  13 hours ago   263 comments top 23
1
TazeTSchnitzel 12 hours ago 6 replies      
Because of the potential for selective or punitive prosecution, I'm in favour of legalising teen sexting. Obviously the law needs to be carefully drafted to avoid allowing child exploitation, but it should absolutely not be possible to prosecute normal sexually-active teens under child pornography statutes. "Trust us, we'd never use it against the people it's supposed to protect" isn't enough.
2
thanksgiving 12 hours ago 3 replies      
I'll be the bad guy here and say that these laws should all just go away. It is just too convenient to plant "evidence", be it alcohol, drugs, or this.

The idea that minors cannot consent is correct. Children are not a scaled-down version of adults; they need to develop, not just grow physically, and children are vulnerable. We need to protect vulnerable members of our society. However, as with so many things in life, we've gone too far. We need to scale back this stupidity, be it "for the children" or "terrorism" or "drugs are bad".

There's a quote about doing the same thing and expecting different results which I can't remember off the top of my head, but to me all of this, including the drama over marriage and abortion, feels like bread and circuses to keep people occupied with silliness.

3
jfaucett 12 hours ago 11 replies      
What is the rationale behind defining child porn as involving anyone under 18? I think most would agree there is a profound difference between a 23 year old dating a 17 year old and a 50 year old having anything sexual to do with an 8 year old. There are worlds of moral difference here.

The first case is something that actually happened to a friend of mine. He was 23 and dating a 17 year old, got convicted, sent to prison, and is now labeled a sex offender for life.

Are there any countries out there that take a sane approach to this moral conundrum?

4
will_brown 10 hours ago 0 replies      
When I was in law school I worked on a Florida legislative project to bring a version of Romeo & Juliet laws to Florida [1]. For example, there are cases where 17 year old girls become registered sex offenders for life due to relationships with their 16 year old boyfriends (whom, in some cases, they have married). These types of cases have made it to the SCOTUS, challenged on the grounds of cruel and unusual punishment and double jeopardy, and upheld by the court.

It goes without saying that civil and criminal child abuse, neglect, and abandonment cases are as difficult as any to stomach. But there are famous cases of parents with framed "bathtub" photos of their kids on their hallway walls, and grandfathers with pictures of their grandchildren running through sprinklers, and as you might imagine they were painted as the devil incarnate.

Our law clinic was very aware of the issues of text messaging even then, in 2006. At the time we used the Socratic method to debate the logic and human aspects of charging the young children who originate "sext messages" of themselves. My position was always that such a law would undermine the protection a victim needs in order to come forward.

One of the major issues is that many of these crimes are under the strict liability [2] standard, meaning that if the pictures exist, the defendant is guilty, notwithstanding their intent.

[1] https://en.m.wikipedia.org/wiki/Statutory_rape#Romeo_and_Jul...

[2] https://en.m.wikipedia.org/wiki/Strict_liability

5
zaroth 12 hours ago 2 replies      
The idea that it is constitutional to charge someone with a felony for taking a picture of their own body is absurd. The judges were just too chicken to strike the entire statute. Actually, my apologies, that's unfair to the chicken.
6
dv_dt 11 hours ago 0 replies      
This is a symptom of how, I think, Republicans and many neoliberal Democrats have pushed our government to systematically stop recognizing compassion as one of its functions. All that is left is possession (property rights) and punishment (criminal enforcement). Charity and compassion are seen as some exotic thing that belongs in the personal sphere, not as a government function in society.
7
microcolonel 5 hours ago 0 replies      
> He claims any potential harm in his case is just as attenuated and vague as Free Speech Coalition. Because no harm was done, he should have the same right as any adult to take voluntary photographs of his own body. We do not find this argument persuasive.

Given how persuasive this argument is, I am a bit surprised that the court was not persuaded. I understand that Ashcroft v. Free Speech Coalition has some distinguishing factors, like the fact that the depictions in question were clearly distinguishable from records of a crime (on account of them not being photographic images of actual minors).

This is a messy and complicated business, it seems like there needs to be a legislative solution since the supreme court can not find what most people consider a just outcome in an interpretation of the statute.

8
pharrington 9 hours ago 0 replies      
"the Court found the prohibition of images that appear to be, but are not actually, minors improper; any harm stemming from those false images does not necessarily follow from the speech, but depends upon some unquantified potential for subsequent criminal acts."

Washington SC: "He claims any potential harm in his case is just as attenuated and vague as Free Speech Coalition. Because no harm was done, he should have the same right as any adult to take voluntary photographs of his own body. We do not find this argument persuasive."

Am I correct in reading this as the court basically saying "The defendant presents a solid argument. We don't care." ??

9
21 12 hours ago 3 replies      
What other things that you can do to yourself are illegal? Suicide. Abortion in some places. What if I cut a piece of my body and eat it, is that cannibalism?
10
dingo_bat 4 hours ago 2 replies      
Since almost all of us agree that sexual preference is something that is innate, we cannot consciously affect it significantly, isn't it wrong to prosecute pedophiles? IMO, these people should be treated with pity, not as criminals. If we do that, all these absurdities will disappear automatically.

Of course, coercion is still illegal, and coercion of a minor would carry higher criminal penalties. But distribution of content does not automatically imply coercion or violence.

But sentencing somebody to prison for looking at cp is akin to sending gay people to prison.

11
odammit 12 hours ago 1 reply      
Well Snapchat is surely screwed. There goes their second most popular snap after putting cat ears on a fidget spinner.
12
ikeboy 12 hours ago 4 replies      
Should note the sentence here was 30 days + community service.
13
mighty_bander 1 hour ago 0 replies      
But will they be tried as adults? That is the question.
14
c3534l 12 hours ago 1 reply      
The ruling is correct, but the law is stupid.
15
crb002 11 hours ago 0 replies      
Didn't Ron/Rand Paul have a proposed bill to limit punitive action on structural crimes (acts that are crimes only because the person is underage): this, alcohol in possession of a minor, curfew violations, ...?
16
myrandomcomment 8 hours ago 1 reply      
This is a 100% misread of what the court said. The court said: "this is the way the law is written, it sucks, but this is the law as written."
17
Chiba-City 9 hours ago 0 replies      
Why "prosecution" and not "investigation." There's a difference. It depends upon an audience. What about self portraits? That's how photographers learn. I had a Canonet and black and white film as a kid. I loved it. Asking questions doesn't have threaten kids. Good kids will answer good questions. Have I missed something?
18
lisper 11 hours ago 0 replies      
19
mirimir 10 hours ago 0 replies      
This was predictable, no? I mean, all those damn preteen cam whores! And wannabe cam whores. And kids, having fun.

So what, in a decade or so, an appreciable percentage of young people will be labeled as child pornographers?

20
ada1981 7 hours ago 0 replies      
I imagine this would also apply if a child took photos of himself and then later, as an adult, sent those photos to someone?
21
quuquuquu 13 hours ago 4 replies      
Everything is a crime in our neo-puritan state.

Congrats, two 17 year olds sending pictures of themselves consensually = child porn.

What I also couldn't understand from the article:

A picture of someone who is an adult, but is /pretending/ to be a minor is also child pornography?

What?!

Pornhub.com will need to take a hard look at some of its content providers then.

Or does it not apply to Montreal based companies?

What if the servers are in the US? What if the client is in the US?

Lunacy.

22
whipoodle 11 hours ago 1 reply      
We are out of our minds.
23
pgnas 10 hours ago 0 replies      
As a parent, and also someone who was a child at one time, I can see that kids make all kinds of mistakes. I mean, I seriously don't even ask why sometimes, because I know the answer I'm going to get will not make any sense.

I'm very lucky and I have some wonderful children; I cannot see them going out and doing something like this. However, people do stupid things. I don't think we should be charging someone with something that is going to stick with them for the rest of their life for something they did when they were 13, 14, or 17 years old. They are kids, and their brains (hello, all you scientists out there) are not even fully developed.

Listen, if a 17-year-old sent an explicit image to my daughter, I would be over at the parents' house faster than you can imagine. That kid obviously needs a swift kick in the ass; he or she does not deserve to be charged with a crime that will follow that person for life.

If we continue to charge children as adults, kids that are 13, 14, 15 or 17, then we might as well change the age of adulthood to 13 years old.

Has anyone given any consideration to the crap these kids are bombarded with on a moment-by-moment basis, on television, in music, and in all entertainment? It is hyper-sexualized. What do you expect your children to do when they are faced with this?

It is a sad situation, and I don't agree with the stupid thing this kid did, and kids are going to do stupid things, but they are kids! Come on now, haven't we seen cases recently where adult pedophiles have been let off? Are we going mad?

24
World's largest electric vehicle empa.ch
241 points by prostoalex  16 hours ago   130 comments top 20
1
philipkglass 16 hours ago 17 replies      
Because the vehicle is electric, there is no need to heat up the brakes when descending. This is because the enormous electric engine acts as a generator and recharges the battery pack. That same energy is then used to help the vehicle travel back up the hill. Phys reports, "If all goes as planned, the electric dumper truck will even harvest more electricity while traveling downhill than it needs for the ascent. Instead of consuming fossil fuels, it would then feed surplus electricity into the grid."

Clever. It can do this because it travels uphill empty and comes downhill full.

2
sxates 15 hours ago 3 replies      
I've heard people say that one reason why we shouldn't bother with electric cars is that they still generate a lot of carbon from their production supply chain. All the minerals and metals mined, the parts being shipped around, the energy in manufacturing, etc.

But things like this demonstrate why that is the wrong way to look at it. We can, and absolutely should, electrify everything. The whole supply chain.

The truck in the mine, the smelting factory, the assembly line, the warehouse, and the big rig that delivers it to you. There's no reason why every one of these couldn't run on renewable electricity. The only reason they don't is because until recently it was more expensive, but that is no longer true. The total lifecycle carbon impact of everything we make can go to nearly zero as all these points electrify and as our grid migrates over to renewables.

That it can't happen all at once is no reason not to start.

3
ortusdux 16 hours ago 0 replies      
There are ore train cars that use regenerative braking on the way down the mountain, generating as much as 5x the power needed for the empty trip back up. Reportedly, they power the mining town at the top of the mountain.

https://en.wikipedia.org/wiki/Regenerative_brake#Conversion_...

4
pankajdoharey 1 hour ago 1 reply      
This is a classic example of stupid science reporting. For decades now, the classic diesel-electric locomotives used on trains, and electric trains themselves, have been the largest electric vehicles of any kind. The Komatsu heavy truck is anything but an accurate example of the "World's largest electric vehicle".

https://en.wikipedia.org/wiki/Electric_locomotive

https://en.wikipedia.org/wiki/Diesel_locomotive

And by the way, all diesel submarines and ships also use electric drivetrains.

https://en.wikipedia.org/wiki/Diesel%E2%80%93electric_transm...

5
elihu 14 hours ago 1 reply      
> Nickel manganese cobalt cells are also the choice of the German automobile industry when it comes to the next generation of electric cars, Held said.

Interesting claim. I'm not familiar with NMC batteries.

Here's what wikipedia says:

> Handheld electronics mostly use LIBs based on lithium cobalt oxide (LiCoO2), which offers high energy density, but presents safety risks, especially when damaged. Lithium iron phosphate (LiFePO4), lithium ion manganese oxide battery (LiMn2O4, Li2MnO3, or LMO) and lithium nickel manganese cobalt oxide (LiNiMnCoO2 or NMC) offer lower energy density, but longer lives and less likelihood of unfortunate events in real world use, (eg, fire, explosion, ...). Such batteries are widely used for electric tools, medical equipment, and other roles. NMC in particular is a leading contender for automotive applications. Lithium nickel cobalt aluminum oxide (LiNiCoAlO2 or NCA) and lithium titanate (Li4Ti5O12 or LTO) are specialty designs aimed at particular niche roles. The newer lithiumsulfur batteries promise the highest performance-to-weight ratio.

https://en.wikipedia.org/wiki/Lithium-ion_battery

So, it's another kind of lithium-ion battery.

6
KGIII 6 hours ago 1 reply      
I'm not sure the new/changed title is very good. Those giant earth movers, on tracks, that you see in mines are electric vehicles. They are just plugged in with giant cables.

Well, some are. I presume some run on other power but I've not seen one.

7
hownottowrite 16 hours ago 2 replies      
8
ctz 2 hours ago 1 reply      
This is not close to being the world's largest electric vehicle. Early submarines were electrically powered and more than twice the size (60 ft vs ~30 ft), e.g. the Nautilus of 1886.
9
samcheng 16 hours ago 1 reply      
I'm surprised they didn't use one of the existing diesel-electric hybrid dump trucks as a base model. That seems like an easier conversion than converting a truck with a mechanical transmission.

Here's an example diesel-electric hybrid dump truck: https://en.wikipedia.org/wiki/Liebherr_T_282B

10
jlebrech 35 minutes ago 0 replies      
Isn't the ISS bigger?
11
jordache 12 hours ago 2 replies      
Only 8x? Doesn't seem like that much? Maybe more aligned to the patterns in which the trucks are used.
12
olivermarks 8 hours ago 1 reply      
121 tons max loaded weight in the original spec: http://www.ritchiespecs.com/specification?type=con&category=...

I'd be interested to know what range they anticipate from the 4.5 tons of batteries.

13
andrei_says_ 12 hours ago 0 replies      
Which makes me think: wouldn't it be nice if a Tesla, or the aforementioned dump truck, could serve as a battery?

An addition or alternative to a Powerwall, or whatever Tesla's home battery is called?

14
Abishek_Muthian 10 hours ago 0 replies      
It's good that they used an existing truck base and focussed only on the powertrain. But I couldn't see any data on actual efficiency/mileage; the whole article focussed on how big the battery on the big truck is. Is "Its size and strength ensure it can transport materials from a mountain ridge to a valley 20 times per day" the only data on its efficiency?
16
wayanon 7 hours ago 0 replies      
Thank you to all those contributing insightful comments to this and other HN posts! It's why I visit daily.
17
0xFFFE 15 hours ago 5 replies      
For a vehicle of that size to make a controlled descent using only the magnetic field (from the generator) as the braking force, it has to be one massive magnet, no? In which case, it is also adding to the overall weight of the vehicle.
18
basicplus2 4 hours ago 0 replies      
Ignoring one of these in the electric version, of course...

http://www.hitachiconstruction.com/products/ex8000-6/

https://youtu.be/VI57H89g3PA

19
agjacobson 11 hours ago 0 replies      
I
20
gyrgtyn 12 hours ago 0 replies      
High tech green energy used to more efficiently strip mine/mountain top removal. Yay?
25
A Philosophers Take on Global Basic Income: Fund It by Land Value Taxation knowitwall.com
20 points by akvadrako  2 hours ago   9 comments top 5
1
ctdonath 49 minutes ago 2 replies      
The despicable problem with land taxation is that it turns property from owned into rented. It allows for abuses such as raising tax rates to drive people off their property. It replaces true supply and demand with arbitrary choices by under-informed bureaucrats, which never goes well. It is the core of gentrification problems, driving people from long-occupied homes via taxation because others pay a premium for neighbouring properties.

Never mind the core problem with UBI itself: funding inactivity by taxing productivity.

People should be able to own land outright. Let the poor and elderly and independent keep theirs, without being driven out by overwhelming taxes raised to fund the poor and elderly.

2
pcnonpc 1 hour ago 0 replies      
Some populations practice having as many children as they can because their belief system calls for it. They believe they will get more brownie points from a supreme being for that.

So these groups will get more share of the basic income in the next generation and more in the next and so on and so on...

3
lwitt51 52 minutes ago 1 reply      
No pitchforks, but this has sentences straight up from wikipediea article: https://en.wikipedia.org/wiki/Land_value_tax
4
codingdave 1 hour ago 1 reply      
I get it. I like it. But man, would that piss off all the farmers in this country.
5
kijin 43 minutes ago 0 replies      
The concept is simple and elegant on paper, but it's vulnerable to the same problem that often befalls real-world policies dreamed up by philosophers. It's going to be incredibly difficult to assess the "unimproved value" of any parcel of land fairly.

Do you apply a flat rate per acre of land regardless of where it is, to the detriment of rural farmers who own many acres? Or do you take the location into account? Are we going to have an army of certified professionals whose job is to assess the unimproved value of each and every parcel of land in the country?

And if your formula makes any reference to the actual price at which land is traded, how do you tell how much of it is the unimproved value and how much of it is the value of all the improvements that went into it? There is hardly any land in populated parts of the world where no improvement has ever been made to it, so there's no reference point. If there are two parcels of land in the same neighborhood but one is more desirable because it has a view of the waterfront, does the waterfront view count as improvement or an intrinsic part of the value of the land? Everything is going to be based on a series of arbitrary decisions. Even the value of a dollar is based on a series of arbitrary decisions.

Even worse, since it's all going to be an arbitrary benchmark with a lot of variables that can be tweaked, everyone will try to influence it or abuse it in a way that minimizes their own tax burden. Over time, these forces will result in lower assessments for all land, reducing the overall tax revenue and making the whole scheme unsustainable. Right now in the developed world, a disproportionately large share of people's wealth is tied up in real estate, especially for middle-class folks who read political philosophy. But this won't last if the incentive structure changes.

I would much rather just bite the bullet and say yes, we're going to charge tax on the fruit of your labor whether Mr. Nozick likes it or not. Oh, and we're going to charge even higher rates on the fruit of non-human labor, i.e. machines, since they have no rights anyway. Maybe if these machines produce enough stuff, who knows, perhaps one day they'll contribute enough so that the tax on humans can be reduced to zero?
