hacker news with inline top comments    24 Jul 2016 News
The Racket Blog: Racket v6.6 racket-lang.org
24 points by kakashi19  1 hour ago   1 comment top
rayalez 24 minutes ago 0 replies      
John Carmack once said that he is developing VRScript based on Racket. Does anyone know if there's any news regarding its development?
The Uber Engineering Tech Stack, Part I: The Foundation uber.com
187 points by kfish  7 hours ago   65 comments top 14
Animats 5 hours ago 10 replies      
It's interesting that they don't break the problem apart geographically. It's inherent in Uber that you're local. But their infrastructure isn't organized that way. Facebook originally tried to do that, then discovered that, as they grew, friends weren't local. Uber doesn't need to have one giant worldwide system.

Most of their load is presumably positional updates. Uber wants both customers and drivers to keep their app open, reporting position to Master Control. There have to be a lot more of those pings than transactions. Of course, they don't have to do much with the data, although they presumably log it and analyze it to death.

The complicated part of the system has to be matching of drivers and rides. Not much on that yet. Yet that's what has to work well to beat the competition, which is taxi dispatchers with paper maps, phones, and radios.

e1g 6 hours ago 2 replies      
I'd love to know how many people are responsible for devops/operations/app at various stages of any company's journey. Wikipedia says Uber employs 6,500 people so if even 15% of that is on the tech side of the business that's still 1,000+ people allocated to tech. I think this metric would be a useful reality check for a "modern" SaaS project with 3-10 people that's trying to emulate a backend structure similar to the big league.

There are 20+ complex tools listed in the stack, and running a high-visibility production system requires a high level of expertise with most of them. Docker, Cassandra, React, ELK, WebGL are not related in required skills/knowledge at all (as, for example, Go and C are). Is it 5 bright guys and girls managing everything, like the React team within Facebook? Or a team dedicated just to log analytics?

legulere 10 minutes ago 0 replies      
> Screenshots show Uber's rider app in [...] China

Interesting to see Google maps being used, isn't that blocked in mainland China?

NotQuantum 5 hours ago 1 reply      
Uber is really strapped for engineering talent, especially when it comes to SRE. My friends and I, working SRE at various Bay Area companies, get consistently hit up for free lunches and interviews. It's really weird considering that their stack doesn't NEED to be this complex....
sandGorgon 4 hours ago 1 reply      
What I'm really wondering about is their app. The UI of the app can be changed without an app update. For example, the UI during the pride parade, or the moment of silence ( http://gizmodo.com/uber-makes-riders-take-a-moment-of-silenc... )

I wonder what's the architecture of the app and the API for this.

sixo 6 hours ago 0 replies      
This is just about all the tech there is, right?
50CNT 4 hours ago 0 replies      
So much technology, yet I still had to load the site 3 times and fiddle with uMatrix to get the page to scroll. Now, lots of people do silly things with javascript, but on a blog article on your tech stack it doesn't speak well of things.
marcoperaza 6 hours ago 1 reply      
Quite an intricate architecture. I can't help but wonder if all of the complexity and different moving parts are worth it. Does it really make more sense than throwing more resources at a monolithic web service? Clearly the folks at Uber think it does, and they've obviously thought about the problem more than me, but I'd love to understand the reasoning.
tinganho 4 hours ago 0 replies      
This sounds like a blog post emphasizing that the more buzzwords you use, the better.
creatine_lizard 7 hours ago 0 replies      
If it's easy, it'd be nice to edit the title to not be in all caps.
stickfigure 6 hours ago 3 replies      
I've been sitting on the sidelines of the "Uber is great!" vs "Uber are a bunch of dicks!" war for quite some time now. I always figure every company of reasonable size contains elements of both. But geezus... this was posted with the title in all caps "THE UBER ENGINEERING TECH STACK, PART I: THE FOUNDATION". Presumably by someone at Uber. Is their corporate culture that arrogant?
marcoperaza 6 hours ago 3 replies      
mikecke 6 hours ago 1 reply      
For those of you complaining about the title being all caps: it was done for aesthetic purposes. Which means the submitter somehow took the time to uppercase each character of the HN title before submitting.

 text-transform: uppercase;

joering2 5 hours ago 0 replies      
Sounds like a very solid foundation! I'm glad to see they have sufficient system in place to continue spamming the heck out of people who never opted into their advertisement in the first place.


I only wish LE would treat CAN-SPAM seriously and put more resources into criminal enforcement.

Why I'm Not a Fan of R-Squared johnmyleswhite.com
36 points by sndean  5 hours ago   24 comments top 7
Noseshine 4 hours ago 2 replies      

 > does my model perform worse than the true model?
What is a "true model"? I can't make head nor tail of that term. I've never heard it before, nor does it make sense to me from the literal meaning of the words.

kgwgk 1 hour ago 0 replies      
I find this very confusing, but I guess I'm not the intended target audience. Not that I say it's wrong, but I don't really see the point.

Do people really expect R^2 to measure the fit of the model to the true model? R^2 measures the fit of the model to the data: i.e. how well the model performs in predicting the outcomes. In his first example it is clear that all the models are equally useless: the noise dominates and the predictive power of the models is close to zero. In the second example the predictive power of all the models has improved, because there is a clear trend. The true model predicts much better than the others now, but each model predicts better than in the previous example.

In the first example, he concludes: "Even though R^2 suggests our model is not very good, E^2 tells us that our model is close to perfect over the range of x."

Actually our model is "better than perfect". The R^2 for the linear model (0.0073) and for the quadratic model (0.0084) is slightly better than for the true model (0.0064). Of course this is not a problem specific to the R^2 measure (the MSE for the linear and quadratic fits is lower than for the true generating function) and can be explained because the linear and quadratic models overfit. E^2 is essentially the ratio of the 1-R^2 values (minus one). We get -0.00083 and -0.00193 for the linear and quadratic models respectively (the ratios before subtracting one are 0.9992 and 0.9981).

In the second example,"visual inspection makes it clear that the linear model and quadratic models are both systematically inaccurate, but their values of R^2 have gone up substantially: R^2=0.760 for the linear model and R^2=0.997 for the true model. In contrast, E^2=85.582 for the linear model, indicating that this data set provides substantial evidence that the linear model is worse than the true model."

The R^2 already indicates that the linear model (R^2=0.760) and the quadratic model (R^2=0.898) are worse than the true model (R^2=0.997). The fractions of unexplained variance are 0.240, 0.102 and 0.003 respectively, and it's clear that the last one performs much better than the others before we take the ratios and subtract one to calculate the E^2 values 85.6 and 35.7 for the linear and quadratic models respectively.

(By the way: "we'll work with an alternative R^2 calculation that ignores corrections for the number of regressors in a model." That's not an alternative R^2, that's the standard R^2. The adjusted R^2 that takes into account the number of regressors is the alternative one.)
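The E^2 construction discussed above is easy to reproduce numerically. The sketch below uses hypothetical data (not the post's actual dataset): it fits a linear model to noisy log(x) data and computes E^2 as the ratio of unexplained-variance fractions, minus one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 200)
y = np.log(x) + rng.normal(0, 0.1, size=x.size)  # "true model" is log(x)

def r_squared(y, y_hat):
    """Standard (unadjusted) R^2: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Candidate linear model vs. the true generating function
coef = np.polyfit(x, y, 1)
r2_linear = r_squared(y, np.polyval(coef, x))
r2_true = r_squared(y, np.log(x))

# E^2 as described: ratio of the 1-R^2 fractions, minus one
e2_linear = (1 - r2_linear) / (1 - r2_true) - 1
```

A positive E^2 means the candidate model leaves more unexplained variance than the true model; near zero means they are indistinguishable over this range.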

graeham 2 hours ago 1 reply      
Interesting article, and I find it relevant to some problems I'm working on at the moment.

I would add a few challenges. The example is a bit of a strawman - a log(x) function has unique properties that make the Xmax-Xmin vs R^2 work like that. In real data, rarely does a single-variable 'true model' fit as well as the example either.

Context is needed as well - depending on the use of the model, a linear or quadratic fit may be sufficient even for what is clearly a log dataset. The real failing is only for small values of x, maybe 5% of the range of total values. For this case, a bilinear model could fit quite well for the lower 5%, then the existing model for the upper 95%. It depends on the application. I like this phrase:

"When deciding whether a model is useful, a high R2 can be undesirable and a low R2 can be desirable."

Too often statistics are dominated by 'cutoff' values that people apply blindly to all situations.

What do you think of robust regression methods, where obvious outliers are down-weighted?

jostmey 57 minutes ago 0 replies      
Learn about information theory. It is better to calculate the cross-entropy or KL divergence between the data and the model.
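For discrete distributions, the quantities this comment mentions are one-liners. A minimal sketch (the distributions are illustrative, not from the article):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, in nats."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def cross_entropy(p, q):
    """H(p, q) = H(p) + KL(p || q); minimized when q matches p."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(q[mask])))

# Empirical data distribution vs. two candidate models
data = [0.5, 0.3, 0.2]
model_a = [0.5, 0.3, 0.2]   # matches the data exactly
model_b = [0.8, 0.1, 0.1]   # a poor fit
```

KL divergence is zero only when the model reproduces the data distribution, and grows as the fit worsens, which is the property the comment is appealing to.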
Rexxar 2 hours ago 0 replies      
That seems like an advantage of R^2: if R^2 goes down when you add more data, you know your model doesn't fit the data any more.
tgb 4 hours ago 3 replies      
Has the author offered an alternative? Namely, can E^2 be calculated in practice?
acbart 3 hours ago 1 reply      
This is not very accessible for people with weak statistical backgrounds.
Benchmarking correctly is hard (and techniques for doing it better) jvns.ca
13 points by ingve  2 hours ago   2 comments top 2
gus_massa 1 hour ago 0 replies      
A few more recommendations for informal benchmarks:

It should take between 5 and 10 seconds. (With more than 10 seconds it gets boring.) With very short times, there is a lot of noise and you can confuse the noise with a small signal. You can do benchmarks that are shorter but then you must open a statistics book and read it carefully before reaching conclusions.

Repeat it at least 5 times. (Preferably in some order like ABABABABAB, not AAAAABBBBB.) With 5 repetitions you can get a rough estimate of the variation, and if the variation is much smaller than the difference then perhaps you can skip the statistics book. Otherwise increase the run time or increase the repetitions and use statistics.

At least once in a while, run the benchmark method against two copies of the same code. Just make two copies of the function and benchmark the difference between them. The difference won't be exactly zero, because of noise, but it should be small. If your method "proves" that one of the two exact copies is much faster than the other copy, then your benchmarking method is wrong. (This is very instructive; it's much easier to learn about benchmarking noise by doing a few experiments than by reading all the warnings in the books.)
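The three recommendations above (long-enough runs, interleaved ABAB repetitions, and a null A-vs-A check) can be sketched as a small harness. This is an illustrative toy, not a rigorous benchmarking tool; min_seconds here is set far below the recommended 5-10 seconds just to keep the demo quick:

```python
import statistics
import time

def bench(fn, reps=5, min_seconds=0.1):
    """Time fn(); each sample runs enough iterations to last min_seconds."""
    samples = []
    for _ in range(reps):
        iters, elapsed = 0, 0.0
        start = time.perf_counter()
        while elapsed < min_seconds:
            fn()
            iters += 1
            elapsed = time.perf_counter() - start
        samples.append(elapsed / iters)  # seconds per call
    return samples

def compare(fn_a, fn_b, reps=5, min_seconds=0.1):
    """Interleave measurements ABABAB... rather than AAAA then BBBB."""
    a, b = [], []
    for _ in range(reps):
        a += bench(fn_a, reps=1, min_seconds=min_seconds)
        b += bench(fn_b, reps=1, min_seconds=min_seconds)
    return (statistics.mean(a), statistics.stdev(a),
            statistics.mean(b), statistics.stdev(b))

# Null test: benchmark the same function against itself. If the harness
# reports a difference far larger than the spread, the harness is broken.
f = lambda: sum(range(1000))
ma, sa, mb, sb = compare(f, f, reps=3, min_seconds=0.02)
```

Interleaving spreads slow drifts (thermal throttling, background load) across both candidates instead of charging them all to whichever ran second.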

eatbitseveryday 22 minutes ago 0 replies      
Tim Harris gave a talk[1] earlier this year illustrating the pitfalls of measurements and analysis in systems research.

[1] https://timharris.uk/misc/2016-nicta.pdf

Hard Forks avc.com
125 points by prostoalex  11 hours ago   76 comments top 21
Animats 9 hours ago 1 reply      
If the loss had happened to anyone other than the people behind Ethereum, the fork would never have happened.

A fundamental problem with Ethereum is contracts between anonymous parties. If the other party in the DAO hack had been discoverable, the hack would not have been a major problem.

Executable, rather than declarative, contracts are probably a bad idea. Putting in a virtual machine is a cop-out - it means you don't know how to solve the problem of expressing contracts formally, and are pushing it off on someone else. That someone else will probably botch it, as the DAO did. I've previously suggested that decision tables [1] would be a better basis for a contract system. This decision table tutorial [2] is something of an ad for a tool, but it uses as an example the sort of things one would want in a contract.

[1] https://en.wikipedia.org/wiki/Decision_table
[2] http://reqtest.com/requirements-blog/a-guide-to-using-decisi...
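As a rough illustration of the decision-table idea (this is not Animats' actual proposal, and the escrow rules below are entirely hypothetical), a contract could be a data structure of condition/action rows evaluated by a fixed interpreter, rather than arbitrary executable code:

```python
# Each rule pairs a set of named conditions with an action.
# The first rule whose conditions all match the facts fires.
ESCROW_TABLE = [
    ({"funds_deposited": True, "goods_delivered": True}, "release_payment"),
    ({"funds_deposited": True, "goods_delivered": False,
      "deadline_passed": True}, "refund_buyer"),
    ({"funds_deposited": True, "goods_delivered": False,
      "deadline_passed": False}, "wait"),
    ({"funds_deposited": False}, "reject"),
]

def evaluate(table, facts):
    """Return the action of the first matching rule, declaratively."""
    for conditions, action in table:
        if all(facts.get(k) == v for k, v in conditions.items()):
            return action
    return "no_rule_matched"
```

Because the table is data, every reachable outcome can be enumerated and audited up front, which is much harder with arbitrary contract code running in a VM.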

chollida1 10 hours ago 9 replies      
Did Ethereum ever come out and try and create a process to dictate what will happen when something like this happens again?

This case was a perfect storm of:

- very bad publicity with most people's first introduction to Ethereum, the DAO, being hacked

- Ethereum core team members and miners being affected by the hack

- a large hack, monetarily speaking, in relation to the ecosystem.

- the ecosystem still being in its infancy, so the system was open to even considering a hard fork option at all

I mean, you can't just outlaw programming mistakes. This is going to happen again.

What happens next time, when the hack is smaller, say 1 million, and doesn't affect anyone on the core Ethereum team or any miners, i.e. it just affects regular Ethereum investors?

Does everyone vote again, with the expected outcome of the miners not worrying about the little people?

Does it not even get a vote?

Or have they set the precedent that they always roll back hacks now?

I think Ethereum has some good leadership, but I think this is something that they really need to get a policy on now.

What can be done here?

antonios 6 hours ago 1 reply      
The author of the article should bother to read the first two lines of the official Ethereum page:

> Build unstoppable applications

> Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.

One should add at the end: "...unless the developers are heavily invested in a contract, in which case they can perform an Ethereum hard fork to take back their investment in case it goes awry. You know, conflict of interest and all that."

Because that is exactly what happened here.

snitko 10 hours ago 2 replies      
Without going into a lengthy discussion, I respectfully disagree with the author: adaptability is NOT more important than immutability. If you don't have immutability, it means there is a potential to avoid consequences for your actions, which, in turn, means the evolution goes in the wrong direction. In fact, even in nature, you cannot evolve back: creatures evolve, try new things, if they don't stick - the branch dies off.

What is being suggested by Ethereum is that you can try an evolution path, see it's not actually working, and then roll back - it seems like a good idea at first because you can supposedly iterate fast. But I think Ethereum would lose the confidence of investors either way: 1) if you hard fork, you set a precedent and now no one is sure what can be rolled back, when, and for what reason; 2) if you don't fork - then money is stolen. I seriously see no way out of this. It probably was a good decision to fork, because that at least allowed them to save face and the investment short-term.

niftich 9 hours ago 0 replies      
The threat of hard forks undermines not only confidence in the governance of the cryptocurrency, but also of any application (in the 'make-use-of' sense) of the blockchain.

Specifically, Ethereum was designed with the explicit purpose of running distributed applications that commit their application state into the blockchain. If this state can be subject to a rollback, in the form of forking from a block in the past -- community consensus notwithstanding -- then you can never rely on the blockchain as storage.

Now, of course, cryptocurrencies only have value if participants believe in them; and these supposed 'smart contracts' of 'state-in-the-blockchain' work the same way. Since we lack the technological means to prevent forks, we need human stewardship to avoid them, and that's clearly not what Ethereum has done.

killbrad 9 hours ago 0 replies      
This is a death knell for Ethereum. Immutability is literally the only thing that made it stand out from any other form of currency. Now it's no better or worse than any other form of currency. Someone can decide they don't like it and reverse a contract? Done.
olh 10 hours ago 0 replies      
> In my mind, adaptability is more important than immutability.

Important for what? Is there adaptability without mutability?

Programmers know that if you implement something on the premise of immutability, then when mutation does occur, the state of your application changes arbitrarily.

Mutability is good for the ones who can rely on an existing provider of centralized governance to make decisions that benefit them. For everyone else it's just plain corruption.

Ethereum pivoted from decentralized governance to centralized governance across borders. They are now solving different problems and I hope they see it.

randomnames 8 hours ago 0 replies      
If you follow USV a little bit, they are clearly in talks about investing in slock.it - probably they already did via the DAO. So no neutral thinking here. And it's an example of the general problem with the fork and Ethereum as a cryptocurrency - people won't trust such an insider-ruled 'club'.
skylan_q 59 minutes ago 0 replies      

If "adaptability" comes down to a popularity contest, this probably isn't a viable/stable/ethical currency.

Gold has had a long history of success and it wasn't able to undergo a hard fork. The fiat currency that we live with now has legitimacy due to the fact that you can generally expect its value tomorrow to be the same as today. Imagine a fiat currency hard fork and what that would do to its credibility.

greenspot 7 hours ago 0 replies      
@Fred, would you mind disclosing all your Blockchain related investments?

On USV's portfolio site I just found Blockstack Labs (2014), Coinbase (2013). Are there more or recently closed ones?

drcode 8 hours ago 0 replies      
I'm really surprised at how negative the comments are here... but at least they are reasonably well informed and respectful.

Oh well, guess I'll just keep trying to build useful stuff on top of ethereum and hope the opinions change (or that this thread is not representative :-)

return0 35 minutes ago 0 replies      
Bitcoiners are libertarian-minded while ethereumists are more democratically-minded.

Same old debate under new disguise.

bonobo3000 5 hours ago 1 reply      
I'm not totally sure how a blockchain works, so please correct me if I'm wrong - for a hard fork to happen in Ethereum or Bitcoin, the majority of users have to "agree" by switching to the new blockchain, right?

In that case, the Ethereum hard fork doesn't sound very controversial - even if the devs decided to hard fork for bad reasons later on, no one would be forced to adopt it. They would be "voting with their feet" by sticking to the old chain.

blhack 6 hours ago 1 reply      
Here's my problem with the hardfork (unless I totally misunderstand it):

I had some ETH stored on the "old" blockchain. I bought them during all the DAO publicity, but never actually bought any part of the DAO contract.

The client, if you are unaware, is a massive disk space hog. Because of this, I stopped running it.

Now, because of the hardfork, in order to get my value back out of the old chain and into the new one, I have to attach an external disk to my computer, and use it to store the chain.

Can I transfer out of the old and into the new chain indefinitely? I have no idea (maybe I'm just bad at google). Can I even transfer now today (once the client finishes syncing [it has been going literally all day today])? Don't know.

I get the why of the hardfork, and I'm not saying that I'm going to stop supporting ethereum, just venting a little bit.

Turns out new technology is new! And sometimes corner cases like mine haven't been accounted for!

grandalf 9 hours ago 0 replies      
Anyone who works with code will know that bugs happen all the time. Semantic bugs can be hard to anticipate, and are sometimes not obvious even to highly knowledgeable reviewers.

The big fallacy underlying the hard fork is the idea that bugs like this will only happen this one time and so the benefit of stealing the money back and giving it to community early adopters outweighs the costs to the institutional credibility of Ethereum.

In reality, there will be many, many more bugs in smart contracts where the intent of the coder does not completely match the behavior of the smart contract in the real world.

The Ethereum community, and notably its core team who wrote the code for the hard fork while claiming to be neutral in the matter, has sent a strong message that it will meddle in the outcome of contracts in which there was no VM bug.

Human institutions are relatively vulnerable to corruption. There is all sorts of graft, favoritism, etc., throughout most human institutions. Ethereum, because of the concentration of power among early adopters, is still vulnerable to this sort of corruption. We've seen it happen with the hard fork.

Is it a big deal? Well, the invisible hand should have awarded the spoils of the theft to the talented hacker who exploited the contract. Those who lost money in the DAO are people who followed a herd mentality and did not insist that the smart contract they trusted be vetted.

Formal verification will help somewhat, as will the improvement of coding practice. I saw some code written by someone involved in the DAO that was written in a way that made it hard to understand the side-effects of various calls. I'd highlight this if doing code review for a simple e-commerce cart, and it suggests that semantic clarity and readability were ranked low on the list of priorities, favoring a denser style that is much more demanding of the reader's understanding of the subtleties of the language.

What is missing is the simple idea of insurance. Suppose investors in the DAO had been allowed to buy insurance against the DAO malfunctioning... This could have been written as a simple smart contract "future" and could have been offered by anyone. So long as there was demand for both sides of the outcome, a price would have emerged to insure one's investment.

So I think we're on a slippery slope, most notably because of the silly idea that this was the last smart contract bug that will be highly significant or controversial.

Cryptocurrencies bootstrap by appealing to speculators who don't really care about the principle of how it's supposed to work, they just want to buy it early and wait for it to get big (as many did with BTC but many more wished they had). This is fine, but we saw the same sort of greed infect a lot more people, and a hard fork (bailout) occur soon after. You can call it adaptability, you can call it a bailout, it doesn't matter. The bottom line is that at present Ethereum is still vulnerable to it and will be for a while. Let's hope Ethereum grows to the point where a small cabal of people who made a bad investment (or hold a particular political view) can't undermine the system.

HairyGing3r 5 hours ago 0 replies      
While the author is on the right track explaining what has become of Ethereum, the Ethereum guys raised money by crowdfunding to build a trustless, global, immutable computer. Not a USD public blockchain.
sanxiyn 10 hours ago 0 replies      
> It's a very interesting time in the public blockchain sector right now. Stuff is happening. Lot's of stuff.

I'd like to learn some of these stuffs. Where should I start? What are some stuffs happening?

compil3r 5 hours ago 0 replies      
in a system where the mining business is private and speculative, it might not be alright to hold mining farms/pools morally or legally accountable.
arisAlexis 5 hours ago 0 replies      
In this case adaptability equals centralization
Lazare 10 hours ago 2 replies      
Personal attacks, which this crosses into, are not allowed on HN. Please edit such stuff out of your comments here.

We detached this subthread from https://news.ycombinator.com/item?id=12151921 and marked it off-topic.

jblock 10 hours ago 2 replies      
I have no idea how Ethereum or the blockchain works. I'm sure that it's all incredibly important and technically impressive, but these all sound like fake words that your parents would use to describe what happens when they try to update iTunes.
Space Emerging from Quantum Mechanics preposterousuniverse.com
87 points by MichaelAO  9 hours ago   12 comments top 9
yk 3 hours ago 0 replies      
For some context, there is a very intriguing formulation of General Relativity in terms of entropy. [1] What this suggests is that gravity emerges from an underlying thermodynamical system. In OP the authors explicitly construct one of these systems.

Unfortunately, together with the success of perturbative quantum gravity [2], this suggests that quantum gravity is just not experimentally accessible. (That's a thought by Freeman Dyson originally.) The formal argument would be that a lot of different theories lead to the same thermodynamical limit, so that one cannot determine the true underlying theory. Or to put it in more Hacker News terms, the problem is analogous to trying to learn about TCP/IP by looking at the output of black hole simulations: there is the layer of numerical mathematics in between, which is at least very hard to breach.

[1] Older guest post by Grant Remmen on Carroll's blog: http://www.preposterousuniverse.com/blog/2016/02/08/guest-po...

[2] For this discussion, it is basically cheating. You quantize gravitational waves on a classical background geometry. The approach works very well because gravity is weak, it also tells us nothing interesting, because one breaks the relationship between gravity and space time (the central feature of General Relativity) by hand.

heimatau 6 hours ago 0 replies      
Tidbits (stars for the top two ideas):

*-* "Or, more accurately but less evocatively, find gravity inside quantum mechanics. Rather than starting with some essentially classical view of gravity and quantizing it, we might imagine starting with a quantum view of reality from the start, and find the ordinary three-dimensional space in which we live somehow emerging from quantum information."

- "If we perturb the state a little bit, how does the emergent geometry change? (Answer: space curves in response to emergent mass/energy, in a way reminiscent of Einstein's equation in general relativity.)

It's that last bit that is most exciting, but also most speculative."

- "But the devil is in the details, and there's a long way to go before we can declare victory."

- "In some sense, we're making this proposal a bit more specific, by giving a formula for distance as a function of entanglement."

- "We're quick to admit that what we've done here is extremely preliminary and conjectural. We don't have a full theory of anything, and even what we do have involves a great deal of speculating and not yet enough rigorous calculating."

*-* "Perhaps the most interesting and provocative feature of what we've done is that we start from an assumption that the degrees of freedom corresponding to any particular region of space are described by a finite-dimensional Hilbert space."

- "A finite-dimensional Hilbert space describes a very different world indeed. In many ways, it's a much simpler world, one that should be easier to understand. We shall see."

My 1 cent:

I don't know enough about this subject but...it's this creative thinking that is desperately needed in the sciences. I'm not trying to tear down others but instead just say that this is what happens when education advances. When enough (abstract) people focus on a subject, we will find a breakthrough.

Animats 6 hours ago 0 replies      
This is impressive. Not that I understand it. But it's encouraging to try to derive space and gravity from quantum mechanics.

Quantum mechanics seems to be how the universe really works. Outrageous predictions of quantum mechanics, from the two-slit experiment onward, have been experimentally verified. So it's a sound base for further work. Physics has been stuck for a century trying to reconcile relativity and quantum mechanics. This might be a way forward.

It might even lead to something that's experimentally verifiable.

dasil003 5 hours ago 1 reply      
This sounds super interesting to me as someone with a passing interest in astrophysics but no real study. But it also seems such an obvious approach that I'd be shocked if it hadn't been tried before. Is it really that novel?
chmike 4 hours ago 1 reply      
I would suggest reading "Space time quantization" [1], published in 1999, presenting a theory based on the same idea. This theory has since been developed, and the latest results have been submitted for publication. These results are multiple. The most important of them are the explanation of the nature and properties of dark matter. But there is much more.

The seeding work on this theory was published as early as 1967 and 1978.

Publication on arXiv is too restricted, because of the endorser requirement, for the article to be published there.

[1] http://www.meessen.net/AMeessen/STQ/STQ.pdf

mattfrommars 44 minutes ago 0 replies      
What's unique about gravity which is different from other forces?
sevenless 3 hours ago 1 reply      
I don't see what's testable or falsifiable in here. Okay, it's good to get rid of epicycles, and intellectually pleasing, but it still looks like the same objections apply as have been raised around string theory. Even if this is the 'theory of everything', will we not end up with shorter equations that describe the same observations?
themgt 6 hours ago 0 replies      
This has seemed to me for a while the right approach (gravity from quantum mechanics, vs. quantized gravity). Can anyone speculate on the discovery of "gravitational waves" - i.e. in a quantum mechanical description, what are gravity waves made of? A sort of quantized spread of entanglement?
JumpCrisscross 2 hours ago 0 replies      
Wait, is this suggesting gravity is really entanglement writ large?
Making your own web debugging proxy twiinsen.com
60 points by ejcx  12 hours ago   14 comments top 7
y0ghur7_xxx 1 hour ago 0 replies      
If you just want a quick proxy to inspect traffic, Apache with mod_dumpio always seemed the quickest and easiest way to do it: just ProxyPass your traffic and

 LogLevel dumpio:trace7
 DumpIOInput On
 DumpIOOutput On
and all your traffic is in the log files.


risyasin 1 hour ago 0 replies      
Hey. About HTTPS proxying, I can offer you a better way than creating your own CA and generating certs for any domain, which is too much work & configuration, plus compiling OpenSSL. I have done that already, as a free service working at this address: https://ca.parasite.io You can easily use a Lua module to download certs for any domain, as Zip or JSON or pfx. It contains all the files you need: root, intermediate and target cert, with private keys of course. As the owner/developer, that domain and service is going to work for years, at least till 2027 (my root cert's expiry date).

Note: Created certs have a 60 min cache (nginx) to improve performance. You don't want to download each certificate for all static files in a single request.

stephenr 34 minutes ago 0 replies      
I'm all in favour of owning your stack completely, but sometimes you need to be pragmatic for the use-case.

I occasionally need to debug traffic. Not often, but occasionally.

For me, the ~$5 on Cellist (http://cellist.patr0n.us/index.html) was a no-brainer.

patcheudor 9 hours ago 1 reply      
>They all had good features, but none had all of my desired features.

Many intercepting proxies, like Fiddler with FiddlerScript and the Burp Suite through Burp Extender, can be extended to have any feature you want by writing your own code or leveraging someone else's. Personally the only time I've found myself thinking I might need nginx for a debugging proxy is when I need scale. I'd rather use something that's close enough, write stuff where I need to, then focus on doing really cool things with them like finding vulnerabilities for fun and profit.

andersonmvd 10 hours ago 2 replies      
You can do that using mitmproxy as well, as explained here: https://dadario.com.br/mitming-ssl-tls-connections/
nchelluri 9 hours ago 0 replies      
Thanks, this is really neat. I was thinking of something like this, and my only idea was to write my own from scratch. While that might be educational, it was daunting, and I was guessing it would have limited support and bugs.
colemickens 6 hours ago 0 replies      
I'm confused. mitmproxy shows me request/response bodies and lets me edit and replay requests. Those seem like fundamental features (Fiddler, Burp, mitmproxy all seem to have them). I don't see how this is done with nginx reverse proxying and logging, or is that coming in part 2 or 3 maybe?
Reference Counting: Harder Than It Sounds playingwithpointers.com
27 points by sanjoy_das  8 hours ago   7 comments top 3
akkartik 3 hours ago 2 replies      
This is probably simplistic, but in my safe, toy assembly language for teaching programming I simply avoid ever sharing (always refcounted) pointers between threads. Instead it pervasively uses channels for communication, and while channels are generic, anything you put in them gets deep-copied on write. The deep copy is smart enough to preserve cycles, so if you send in a linked list that contains a cycle, the reader will see a linked list with an identical topology. It will just be utterly disjoint from the original linked list.

These design decisions allow me to provide safe pointer access and avoid all race conditions while teaching programming and concurrency, but they probably incur significant performance loss on certain programs. My hope is that the design constraints they impose on the programmer aren't insurmountable. We'll see.

(More info on the project: https://github.com/akkartik/mu#readme. On its memory model: https://news.ycombinator.com/item?id=11855470. On the deep-copy implementation: https://github.com/akkartik/mu/blob/07ab3e3f35/073deep_copy....)
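The cycle-preserving copy described above is the same idea as the memo table in Python's copy.deepcopy; a minimal illustration (the Node class is purely for the example, not Mu's actual representation):

```python
import copy

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

# Build a cyclic "linked list": a -> b -> a
a = Node(1)
b = Node(2)
a.next = b
b.next = a

# deepcopy threads a memo table through the traversal, so the copy
# preserves the cycle while sharing no objects with the original.
a2 = copy.deepcopy(a)

assert a2 is not a and a2.next is not b   # utterly disjoint
assert a2.next.next is a2                 # identical topology
```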

zzzcpan 1 hour ago 0 replies      
I'm thinking, for shared refcounted pointers would it be better to just move synchronization off the critical path completely? I mean operate on local pointers in each thread, like it's a single threaded app, but every hundred decrements merge their counters. And release memory only if counters were zero and synchronized for some time, i.e. for at least a couple of synchronizations on every thread or something. It should be possible to get an order of magnitude better performance, than with any kind of synchronized refcounters.
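A toy single-process sketch of that scheme, with per-thread counts merged into the shared total only every hundred updates (the class name and batch size are illustrative; a real collector must also decide when a zero total has been stable long enough to actually free anything):

```python
import threading

class SloppyRefcount:
    """Per-thread increments/decrements, merged into the shared
    counter only every BATCH local updates, so synchronization is
    amortized instead of sitting on the critical path."""
    BATCH = 100

    def __init__(self, initial=1):
        self.total = initial
        self._lock = threading.Lock()
        self._tls = threading.local()

    def _bump(self, delta):
        pending = getattr(self._tls, "pending", 0) + delta
        if abs(pending) >= self.BATCH:
            with self._lock:          # rare, batched synchronization
                self.total += pending
            pending = 0
        self._tls.pending = pending

    def incref(self):
        self._bump(+1)

    def decref(self):
        self._bump(-1)

    def approx(self):
        # Only approximate until every thread flushes its pending count.
        with self._lock:
            return self.total
```

Reads are stale by up to BATCH per thread, which is exactly the trade being proposed: much cheaper updates in exchange for delayed, conservative reclamation.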
nwalfield 3 hours ago 0 replies      
One way to reduce the cost of cross CPU synchronization is to use sloppy reference counters:

"An Analysis of Linux Scalability to Many Cores" (https://pdos.csail.mit.edu/papers/linux:osdi10.pdf)

Zapping Their Brains at Home nytimes.com
83 points by prostoalex  14 hours ago   42 comments top 12
intrasight 6 minutes ago 0 replies      
From https://en.wikipedia.org/wiki/Self-experimentation_in_medici...

"A number of distinguished scientists have indulged in self-experimentation, including at least five Nobel laureates; in several cases, the prize was awarded for findings the self-experimentation made possible. Many experiments were dangerous; various people exposed themselves to pathogenic, toxic or radioactive materials. Some self-experimenters, like Jesse Lazear and Daniel Alcides Carrión, died in the course of their research."

maxharris 10 hours ago 4 replies      
There are things worse than people experimenting on themselves. A world where there are such stringent controls that people are caught and/or punished for doing it, or where scientists do not continue to study and publish for fear of what a few people will do, is quite obviously worse.

I am glad that there is at least an attempt here to engage and warn people. (If I were interested in doing this, I would definitely heed the warning myself!)

Having said that, we ultimately have to remember and respect the right to individual self-determination, as well as the incredible value of openness in scientific research.

While I'm not willing to do this to myself, what if these people are onto something, and their experimentation leads to something that enriches our world? (I'm not saying that it's likely, but it is possible.) It's not anyone's place to do much more than warn them, as has been done, and then to let them be.

ekianjo 12 hours ago 1 reply      
It may be relevant to point out that the efficacy of tDCS is far from established:


> Methods

> Single-session tDCS data in healthy adults (18–50) from every cognitive outcome measure reported by at least two different research groups in the literature was collected. Outcome measures were divided into 4 broad categories: executive function, language, memory, and miscellaneous. To account for the paradigmatic variability in the literature, we undertook a three-tier analysis system; each with less-stringent inclusion criteria than the prior. Standard mean difference values with 95% CIs were generated for included studies and pooled for each analysis.

> Results

> Of the 59 analyses conducted, tDCS was found to not have a significant effect on any regardless of inclusion laxity. This includes no effect on any working memory outcome or language production task.

LukeShu 10 hours ago 0 replies      
The HN discussion a couple of weeks ago on "Neuroscientists' Open Letter To DIY Brain Hackers"[1] (144 comments) might be of interest.

The topic has appeared on HN other times, but never getting >=10 comments.

[1]: https://news.ycombinator.com/item?id=12078895

danielmorozoff 12 hours ago 1 reply      
TMS especially has been looked into heavily by researchers as well as the military over the last 20 years. The results have been remarkable, both as treatment and as a concentration/skill and learning booster, with effects being maintained long after treatment. I believe NPR did a segment on this recently as well.

It's ironic to think that, given everything we think we know about how the brain functions, such a gross and simple procedure as wiring a battery to your head or inducing a magnetic field would have such strong and even beneficial effects.


dpflan 12 hours ago 2 replies      
There should be a way for these DIY-ers to share what they're doing with each other and the scientific and medical community. An application that can provide a network of support and communication would be beneficial to these neuro-adventurers (current mental and emotional state, motivations, treatment plans, etc). Self-medication is not a bad idea, but it can require proper guidance and information to be truly effective.
deutronium 13 hours ago 1 reply      
Ben Krasnow has an awesome project using Transcranial Magnetic Stimulation (rather than Transcranial direct-current stimulation)

https://www.youtube.com/watch?v=HUW7dQ92yDU (part1)

https://www.youtube.com/watch?v=B_olmdAQx5s (part2)

kartD 9 hours ago 0 replies      
On a slightly related note, has anyone tried Thync (http://www.thync.com/)? It uses tDCS. I wanted to hear about anyone's experience with the device.
ChuckMcM 13 hours ago 4 replies      
Always a bit scary when people experiment on themselves. I've never understood why people would put their own brain at risk, either through untested drugs or untested therapies.
chrispie 12 hours ago 0 replies      
Hmm... feels like there is something missing in this article: a nice introduction, and then the article ends. More like a recap of past events. Would like to hear some summary/conclusions/insights from the in-reach DIY "probands".
Billonto 4 hours ago 1 reply      
I wonder if there are any studies showing whether this type of stimulation would be effective for dyslexia.
Pokemon GO API github.com
62 points by fmax30  7 hours ago   35 comments top 10
2bitencryption 4 hours ago 6 replies      
So let me get this straight -- for these "unofficial" APIs, someone just scraped a bunch of packets from their phone while letting Pokemon Go run on it? Then investigated to see what the communication from client to server looks like, then implemented an API that mimics that communication?

If that's all so, could the PoGo devs simply enforce some type of device authentication to 'shut down' these APIs, or otherwise take different steps to make unofficial APIs less compatible/more difficult/effectively impossible?

spdy 5 hours ago 0 replies      
prayerslayer 2 hours ago 0 replies      
I am just happy to see that the API has "trading" as a concept, looking forward to that feature.

Overall it's sad that most game mechanics of the original games didn't make it into Pokemon Go. Does anyone know how much time they had to implement it?

tfm 1 hour ago 0 replies      
Probably good to regard these first few weeks (months?) of Pokémadness as an "open beta" period, before the security measures get turned on. We can look at Niantic's previous project, Ingress, for a roadmap.

The two major categories of cheatifying in Ingress are falsifying one's location and multi-accounting. There's precious little that can be done about the latter, so Niantic focus on banning players that appear to be "spoofing" their location.

Given the wealth of different devices and playing scenarios, immediate detection of GPS spoofing is infeasible. Things like WiFi router locationing idiocy (or even just dodgy GPS antennae) play havoc with the utopian dream of perfect positioning every time. If a player performs actions seconds apart that are separated by thousands of miles then the game temporarily ignores them, but after some time in the naughty corner they can resume play.

Hardier spoofing detection instead depends on longer-term profiling. Ingress has a similar API to Pokémon Go: JSON chunks (rather than protobuf) over HTTPS, most fields out in the open, but each request from the app includes a monolithic "clientBlob" containing device characterisation. The format of this has been (presumably) reverse-engineered by a few hardy souls but it is certainly closely-protected Niantic knowledge. We could safely assume that it's a proprietary blend of signal strengths, gyroscope readings, touch events and timings, secret herbs and spices etc.

The clientBlobs lend themselves to offline processing. There are conceivably servers continuously trawling through a backlog looking for tell-tale patterns of bad behaviour, but it also provides an audit trail if a particular player is suspected of spoofing. Occasionally Niantic indulges in mass purges, which presumably follow from a new cheat detection heuristic being run on all the collected data for some period. These "ban waves" have a reputation for penalising unusual device configurations (the most recent major wave appeared to target, amongst other things, players with modified Android variants that might mask GPS-falsifying code, including cheaper Chinese knock-offs, and Jolla phones running Sailfish).

Occasionally during major Ingress gaming events, so-called "XM anomalies", there is some level of human supervision to quickly identify and remedy clearly-fraudulent player behaviour, but for day-to-day operations it seems that account termination, so-called "hard bans", and shorter-lived "soft bans" are entirely automated, and based on offline player data analysis.

Getting back to the New Cruelty: the clientBlob was not part of Ingress's initial implementation; for a while after it was introduced it was ignored, and then it became mandatory. A similar opaque chunk of data is included in the Pokémon Go requests, so we should look forward to its imminent deployment once Niantic scrape together enough Pokécoins to buy a few new servers for batch processing. At that time these convenient APIs won't have long to live.

atoko 5 hours ago 1 reply      
All these services contribute to the unstable server situation
kveykva 3 hours ago 3 replies      
Instead of implementing bots and trackers, someone could implement:

 * Just a working 3 step tracker
 * Gym high scores
 * Display nicknames of gym pokemon

yelnatz 5 hours ago 0 replies      
Pokemon Go Java API.

The Python version has been out for weeks now and I thought this was the Golang version.

airplane 4 hours ago 2 replies      
I saw in the examples that you can catch Pokemon with this API. Does that method give you an automatic excellent throw every time, then?

Also, does this API depend on running on Android?

airplane 3 hours ago 0 replies      
Does anyone know about the legality of projects like this in the US?

I vaguely remember stories about game companies legally going after companies making bots.

Would uploading an API like this open someone up to a lawsuit? What about someone uploading a bot or a botting framework?

jacquesm 4 hours ago 1 reply      
Could someone please start something called 'Pokemon news'?
Overview of all Amazon AWS APIs aws-api.info
154 points by nl5887  16 hours ago   25 comments top 7
TheDong 11 hours ago 0 replies      
What does this provide compared to the official documentation for each service available on Amazon's website, e.g. https://docs.aws.amazon.com/amazonswf/latest/apireference/AP...

Each one is available from https://aws.amazon.com/documentation/ -> click service -> click "Api Reference"

If all you've saved is one click, I don't think it's worth it, so what else does this do?

thisismyhnuser 12 hours ago 4 replies      
I really wish someone would re-write all of AWS documentation and make everything simpler to understand. I'd like to use AWS but the documentation as-is would take hours to read.
_ao789 15 hours ago 1 reply      
Very nice. It would be cool if you could try out API calls from the examples (à la Swagger).
karavelov 9 hours ago 0 replies      
The whole set of AWS APIs is formalized, and that formalism is one of its strengths: everybody can write a transformation that creates bindings for their language of choice. For an example of this, take a look at:


P.S. Agree that not all of the services adhere to the same bar.

dkarapetyan 12 hours ago 2 replies      
Keeps redirecting to S3 APIs for me. Is that intended?

One thing people should know about AWS and AWS APIs in general is that it is a ghetto. The ad-hoc nature of most AWS services and their weird interactions is indicative of generally bad design. Even with the AWS Ruby SDK I can barely get anything done without consulting 5 different references about which parameters are required, which are optional, and what the sequence of various calls is supposed to be to get an intended result.

So even though this is useful a cookbook would have been much more useful.

posnet 11 hours ago 0 replies      
Which one of the AWS SDKs is this based on?

Or did you actually scrape the documentation web pages?

Tillie95 6 hours ago 0 replies      
With a new tool, spreadsheet users can construct custom database interfaces mit.edu
40 points by renafowler  10 hours ago   23 comments top 9
kfk 2 hours ago 1 reply      
OK... but where are the decent user-friendly tools to manipulate, manage, and share data and insights? Really, the problem is that to this day data in corporations is a mess. The moment you step out of the standard warehouse systems, you find yourself navigating a mess of spreadsheets, Word docs, PowerPoints, all sometimes showing the same data in different formats. You duplicate data every time you send an Excel file by email... this data needs to stay in sync, and when it doesn't, people get screamed at or fired. That means a lot of work goes into manually maintaining thousands of different files to make sure they tie to the overall picture. It's a mess. The bad part is that nobody is looking into this; everybody is focusing on analytics or reporting, which by now are the easy part.

Then you have ancient tools like HFM or BPC that try to "enhance" spreadsheets and spreadsheet consolidations. They don't work. They are barely reliable and they cost a lot of money for no reason (they are simple SQL databases; they can't even be compared to much more complex software like Salesforce or similar).

Corporations are now being sold this whole "big data" thing, which is old news for tech audiences but just new for most of the big companies nowadays. Unfortunately, while big data has a lot of potential, it further pulls investment away from good old small data, where there is probably much more ROI to grab, simply because solving this problem is not that difficult and not that expensive if you actually want to fix it.

Now everybody wants to do Predictive, but Predictive won't beat saving hundreds of employees thousands of working hours by improving the efficiency of how we handle data and data insights. You can literally create a new workforce out of the many hours you would save with better efficiency in this area. And that's without even considering that, on top of efficiency, you get much better, more fact-based decisions, driven by the overall increase in transparency, which you lose when insights are spread across thousands and thousands of files.

There is some light at the end of the tunnel. Software solutions like Tableau go in the right direction, but they do not provide the much (more) needed tools to properly manipulate and manage data and, especially, consolidations and integration. The only way out of this mess is to control the data flow, and especially to have one and only one data flow. That means once your insights are approved and locked, every other view should read this data; there should be no manual intervention any more. If the locked data is wrong, it will be wrong for everybody, which is a good thing compared to having multiple copies of the same data and people fighting over which one is the right one.

dhruvkar 6 hours ago 0 replies      

I haven't used this in production, but messed around with it for a while. Seems to do what this article is suggesting.

fspeech 7 hours ago 0 replies      
Very cool. The right side of the article provides links to a video presentation of the tool and a SIGMOD paper reporting on the details. It is specifically compared against Access in a study on usability.

Link to the project page: http://people.csail.mit.edu/ebakke/sieuferd/

philprx 5 hours ago 0 replies      
Am I missing something... where is the tool?

Source code or it doesn't exist ;)

alirobe 6 hours ago 1 reply      
So... Pivot Tables? Excel has done this for ages. Excel 2016 even has in-memory BI.
wx196 4 hours ago 1 reply      
"when you have something extremely industry-specific, you have to hire a programmer who spends about a year of work to build a user interface for your particular domain" -- with tools like Oracle APEX it can be done much faster, so I think there's nothing new here.
ww520 4 hours ago 1 reply      
Microsoft Access?
zyxley 7 hours ago 4 replies      
So... they've reinvented Access?
Marionetic 6 hours ago 0 replies      
really cool, easy start to any project
Stealing Bitcoin with Math speakerdeck.com
39 points by marksamman  4 hours ago   3 comments top 2
dcousens 45 minutes ago 0 replies      
That point when you realise you're responsible for the data in someone's presentation.
How we broke PHP, hacked Pornhub and earned $20k evonide.com
284 points by KngFant  23 hours ago   84 comments top 11
krapp 19 hours ago 4 replies      
The takeaway:

 - You should never use user input on unserialize.
 - Assuming that using an up-to-date PHP version is enough to protect unserialize in such scenarios is a bad idea.
 - Avoid it or use less complex serialization methods like JSON.
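The same pitfall exists in other languages; a minimal Python sketch of the difference (pickle standing in for PHP's unserialize; the Evil class is purely illustrative):

```python
import json
import pickle

# Formats that can reconstruct arbitrary objects (PHP's unserialize,
# Python's pickle) let attacker-controlled bytes dictate what gets
# instantiated -- and, via hooks like __reduce__, what gets called.
class Evil:
    def __reduce__(self):
        return (print, ("side effect ran during deserialization",))

payload = pickle.dumps(Evil())
# pickle.loads(payload)  # would run the embedded call; never do this
                         # with untrusted input.

# JSON can only ever yield plain data: dicts, lists, strings, numbers.
safe = json.loads('{"user": "alice", "admin": false}')
assert safe == {"user": "alice", "admin": False}
```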

danso 20 hours ago 2 replies      
OT: Is there a site that curates these kinds of interestingly detailed hacks? Like Dan Luu does for debugging stories? (https://github.com/danluu/debugging-stories)
ckdarby 15 hours ago 2 replies      
That moment when the company you work at is on the front page of Hacker News xD
aprdm 9 hours ago 1 reply      
Really good write-up. Some people are really smart; I wouldn't ever be able to do that kind of stuff, even after programming for years.
watbe 20 hours ago 0 replies      
This is an elaborate hack and a very detailed writeup. Thanks for sharing.
ndesaulniers 18 hours ago 1 reply      
> Using a locally compiled version of PHP we scanned for good candidates for stack pivoting gadgets

Surprised that worked. Guess they got lucky and either got the compiler+optimization flags the same as the PHP binary used, or the release process can create highly similar builds.

tjallingt 14 hours ago 2 replies      
I have some questions about two things in the exploit code that puzzled me:

 my $php_code = 'eval(\'
   header("X-Accel-Buffering: no");
   header("Content-Encoding: none");
   header("Connection: close");
   error_reporting(0);
   echo file_get_contents("/etc/passwd");
   ob_end_flush();
   ob_flush();
   flush();
 \');';
1. They seem to be using PHP to code the exploit (solely based on the $ before the variable name), but I've never seen the 'my' keyword before; what exactly is this language?

2. If I understand the exploit correctly, they got remote code execution by finding the pointer to 'zend_eval_string' and then feeding the above code into it. Doesn't that mean the use of 'eval' in the code that is being executed is unnecessary?

Phithagoras 20 hours ago 3 replies      
Appears to be experiencing the hug of death. May be quite slow
cloudjacker 20 hours ago 4 replies      

From a legal perspective how do companies and hackerone create a binding exemption from laws used to prosecute hackers?

fencepost 17 hours ago 1 reply      
So does Pornhub's bug bounty program include some number of years of free paid membership along with financial bounties? Kind of a "treat us right and we'll let you treat yourself right" kind of thing?
given 4 hours ago 0 replies      
Too bad they didn't just go ahead and:

> Dump the complete database of pornhub.com including all sensitive user information.

And of course leak the data to expose everyone that participates in this nasty business. It is such a sad thing that people are even proud to work at companies like this where humans are not worth more than a big dick or boobs.

And then you get around and say that child porn is so horrible. No, all porn is horrible and destroys our families and integrity. How can there be any dignity left if these things are held to be something good?

Journalists confused an opinion piece for an alcohol-cancer study arstechnica.com
178 points by legodt  19 hours ago   83 comments top 14
nchelluri 16 hours ago 5 replies      
I think modern journalism is like application development at a startup, except with even less QA. "Get it out!!! Get it out!!!" And since people will forget about the article in a few days anyway, as long as nobody who knows better, knows how to make the right noise about it, and has enough energy, time, and incentive to actually do so reads it and starts the correction ball rolling, it's just, kinda... there. Archived, to be read by no one but Google and someone doing a long-tail search some years from now, hopefully with no more important purpose than writing a paper for school that, again, will be disposed of, this time more thoroughly.

We just create so much crap for consumption. Even public broadcasting is full of the 24-72hr news cycle. I've written code that I was employed for where I was like, why are we doing this. I assume journalists are the same.

elgabogringo 18 hours ago 4 replies      
Science and economics journalism are both pretty bad. My favorite thing (OK, one of my favorite things) about blogging/the internet is that I can read real-time thoughts/opinions from actual scientists and economists now, instead of having to rely on professional writers and media.
1ris 16 hours ago 4 replies      
I'd like to know how my drinking behaviour affects my mortality, and how that effect compares to the consumption of tobacco or marijuana, participating in road traffic, jogging, and being overweight.

While it's clear that alcohol is not particularly healthy, I feel the risk is negligible compared to other common behaviours. I like alcohol very much and I'd like to make an informed decision about it. But I dearly miss the crucial, end-user-friendly information.

Hello71 14 hours ago 0 replies      
Amusingly, this article misspells the name of the author of the referenced article several times as "Conner" instead of "Connor", implying that the writers were in a rush to get it out the door.
percept 17 hours ago 0 replies      
"IARC lists ethanol in alcoholic beverages as a Group 1 carcinogen and argues "There is sufficient evidence for the carcinogenicity of acetaldehyde (the major metabolite of ethanol) in experimental animals.""

(Wikipedia, citing http://monographs.iarc.fr/ENG/Classification/Classifications...)

dang 18 hours ago 0 replies      
The 'study' was discussed yesterday at https://news.ycombinator.com/item?id=12142140, but this is more a media story, so we can treat it as a separate topic.
jasonjei 17 hours ago 2 replies      
What I find interesting (perhaps confusing is the better word for me) is that the other article from a few days ago that proclaimed drinking leads to cancer didn't mention that the moderate drinkers have fewer risk factors than the control group of abstainers. (The OP article does indicate the result, however.)

So what is it? Is moderate drinking helping? Or is it the lifestyle of moderation helping?

From the National Cancer Institute: "Can drinking red wine help prevent cancer? Researchers conducting studies using purified proteins, human cells, and laboratory animals have found that certain substances in red wine, such as resveratrol, have anticancer properties (16)."[0]

Meanwhile, the same National Cancer Institute source writes that "[b]ased on extensive reviews of research studies, there is a strong scientific consensus of an association between alcohol drinking and several types of cancer (1, 2)."[0]

Drinking causes cancer but red wine is known to have anticancer properties? Abstainers in one study have higher risk factors than moderate drinkers?

[0] http://www.cancer.gov/about-cancer/causes-prevention/risk/al...

sigdoubt 11 hours ago 2 replies      
Was anyone able to parse this paragraph? I keep getting stuck on the contradiction. Which is mildly hilarious in an article about articles being misinterpreted.

> She goes on, however, to knock back links suggesting that drinking may lower a person's risks of cardiovascular disease (CVD), noting that people who drink moderately also tend to have other lifestyle factors that lower their disease risk. Or, put another way, she noted that in a large US survey in 2005, 27 of 30 CVD risk factors were shown to be more prevalent in abstainers than moderate drinkers.

thefastlane 15 hours ago 0 replies      
Best not to rely on the media to provide us executive summaries of academic papers; just go read the paper itself (DOI is 10.1111/add.13477). It's a very useful read for getting up to speed on where we are in terms of understanding cancer and alcohol.
dahart 16 hours ago 0 replies      
> While these errors may appear minor to some, confusing an opinion piece with research is likely to seem disturbing, if not egregious, to those in the scientific community.

This is far from a new problem, and this particular piece is far from egregious, relatively speaking, considering how bad public science reporting is in general in the mass media.

John Oliver had fun with it recently: https://www.youtube.com/watch?v=0Rnq1NpHdmw

nezt 6 hours ago 0 replies      
Here's a study correlating ethanol usage with a lower risk of lymphoma: http://www.ncbi.nlm.nih.gov/pubmed/22465910

Here's one correlating ethanol usage with a lower risk of kidney cancer: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049576/

Here's a study linking ethanol consumption with a reduced risk of ALS: http://www.ncbi.nlm.nih.gov/pubmed/22791740

And it's not just one study linking ethanol to a lower risk of CVD -- it's several. The story is the same for all-cause mortality. Acetaldehyde doesn't fully explain elevated risks of throat and mouth cancer, in my opinion... Acetaldehyde is a downstream metabolite, whereas the epithelium of the mouth and throat are tissues that ethanol is clearly coming into direct contact with. Again, this is just a crude hypothesis.

Sure, ethanol is a toxin, but avoiding it in an attempt to avoid toxins or carcinogens is a fantasy. Carcinogens are everywhere -- you breathe them, eat them, ingest them, absorb them constantly. This is why low/moderate exposure to sunlight, alcohol, and certain phytochemicals might actually be 'hormetic'.

I'm not saying that in an era of biotechnology and whole genome sequencing, ethanol consumption will be optimal. When we reach that point, we will most likely be consuming some kind of nutrient gel that contains everything the body needs. We will likely inhabit carcinogen free environments. Until that point, and I say this to all my fellow autistic nerds and hacker news readers, it's probably better to go have a drink or two with that cute girl who sits a few cubes down. If you want to extend life, invest/educate yourself on emerging biotechnologies. Otherwise you'll need to start worrying about the carcinogenic materials your electronics occasionally off-gas. Or the PCBs in your wild caught salmon. Or the benzaldehyde. Or the arsenic in your brown rice. Or the pesticide residues in your clothing. Or the ... nevermind.

wang_li 16 hours ago 3 replies      
Why should we hold journalists to a higher standard than science journals? It was only a week and a half ago that JAMA did the same thing: http://jama.jamanetwork.com/article.aspx?articleid=2533698
known 7 hours ago 0 replies      
What does alcohol do to your body and brain? http://qz.com/696693/what-does-alcohol-actually-do-to-your-b...
kevin_thibedeau 18 hours ago 2 replies      
Doesn't greater alcohol consumption correlate with tobacco use and other adverse behaviors? I think they're making quite a stretch to say that the cause is the alcohol.
Water Out of the Tailpipe: A New Class of Electric Car Gains Traction nytimes.com
28 points by prostoalex  13 hours ago   39 comments top 9
_ph_ 5 hours ago 2 replies      
The big piece often left out of the discussion about hydrogen is the energy cost of producing it. Hydrogen is either produced by extracting it from natural gas, which releases CO2, or by electrolysis of water. In both cases, the hydrogen then has to be compressed to up to 700 bar for transportation. In the hydrogen car, the fuel cell converts the hydrogen back into electricity to drive the car. The net result is that in a hydrogen car perhaps 30% of the input electricity reaches the motor. A battery-powered electric car manages about 90%.

The bottom line is, for the same amount of primary energy, a pure electric car gets 2-3x the range of a hydrogen car. And of course, that translates directly into the cost of driving.

The article also gives some off numbers about electric cars: they are not limited to 200 miles of driving (a Tesla does up to 300), and recharging at a Supercharger station takes about 30-45 minutes. And, of course, in contrast to a hydrogen car, an electric car can be recharged overnight at home, so most trips do not require any recharging at all.
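As a rough sanity check on the 30% vs. 90% figures, the stage efficiencies can be multiplied out (the individual numbers below are illustrative assumptions for the sketch, not measurements from the article):

```python
# Rough well-to-motor comparison of the two paths described above.
electrolysis = 0.70   # making H2 from electricity (~30% loss)
compression  = 0.90   # energy spent compressing to ~700 bar
fuel_cell    = 0.55   # converting H2 back into electricity

hydrogen_path = electrolysis * compression * fuel_cell
battery_path  = 0.90  # battery charge/discharge round trip

print(f"hydrogen: {hydrogen_path:.0%}, battery: {battery_path:.0%}")
assert battery_path / hydrogen_path > 2   # the claimed 2-3x range gap
```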

JumpCrisscross 2 hours ago 5 replies      
> Battery electric vehicles are still limited to a maximum of about 200 miles of driving before a recharge is required, and charging up can take time four hours or more in some cases. Batteries are also heavy, which presents challenges for powering larger vehicles like trucks or SUVs.

I had not considered the weight disadvantage of Li-ion. Perhaps we'll have two technologies side by side: batteries for smaller, more centrally-located vehicles and hydrogen for larger and more remote ones.

The energy intensity of enriching, packaging, and distributing hydrogen looks less daunting in a solar future characterised by the duck curve [1]. One could simply generate hydrogen when rates drop below a threshold.

[1] https://www.caiso.com/Documents/FlexibleResourcesHelpRenewab...
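The "generate hydrogen when rates drop below a threshold" idea reduces to a simple dispatch rule; the prices and threshold below are invented, purely for illustration:

```python
# Threshold dispatch sketch for "make hydrogen when power is cheap".
# Prices and the threshold are invented numbers, purely illustrative.
def electrolyzer_on(price_per_kwh, threshold=0.02):
    """Run the electrolyzer only when the grid price dips below the
    threshold, e.g. during the midday solar glut on a duck-curve day."""
    return price_per_kwh <= threshold

hourly_prices = [0.09, 0.05, 0.01, -0.01, 0.00, 0.06]  # hypothetical day
run_hours = [h for h, p in enumerate(hourly_prices) if electrolyzer_on(p)]
```

On a duck-curve day the cheap (even negative-priced) midday hours are exactly when the electrolyzer would run, soaking up otherwise-curtailed solar.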

analog31 9 hours ago 1 reply      
Have they solved the problem of where hydrogen comes from? Last time I looked this up, the main process for making hydrogen is the shift reaction, which is hardly carbon neutral.
zokier 1 hour ago 0 replies      
One of the issues with hydrogen power is that you can't use regenerative braking as easily. Of course you could run a hydrogen/battery hybrid system, but it's not obvious that hydrogen would really be that beneficial in that configuration.
samcheng 4 hours ago 2 replies      
Once you start filling your electric car every night from a plug in your garage, the idea of (weekly?) driving to one of a handful of special hydrogen fueling stations is seriously unattractive. In fact, filling up the old gasoline car becomes annoying.

This article definitely reads like a PR piece from an oil company (which is, after all, where the hydrogen comes from). The New York Times has a history of biased journalism against electric cars...

jessaustin 1 hour ago 0 replies      
Mr. Manning now has enough fueling options in Southern California to cover his 45-mile commute to Playa Vista with little anxiety.


dredmorbius 5 hours ago 0 replies      
The two Achilles heels of fuel-cell automobiles have been the cost of the fuel cell itself, and sourcing the hydrogen needed to fuel it.

Fuel cells typically use a lot of highly expensive catalyst, with full costs on the order of a million dollars each. Odds are good the sales price of these vehicles (just under $60k) represents a small fraction of the manufacturer's actual costs. This is a pilot to gain real-world experience.

Hydrogen is the other problem, both in sourcing it and in distributing and dispensing it. Hydrogen is not an energy source[1] but an energy carrier. You need some other source of energy to produce hydrogen, usually via electrolysis, or some hydrogen feedstock, usually natural gas.

In the case of electrolysis, your problem is the energy cost, which eats about 40% of the input energy. The bad news is you get much less hydrogen energy out than electrical energy in; the good news is that you can store the hydrogen, while electricity doesn't bank well.

In the case of natural gas, you've got the situation that your vehicle is still ultimately consuming fossil fuels, though methane (CH4) emits far less CO2 than petrol (roughly C8H18) -- about half as much. Since you're pre-processing the methane into hydrogen, there's the option of sequestering the carbon for other uses.

Hydrogen also has extensive problems with storage and handling -- it doesn't like to be contained, will literally leak out between the atoms of containers, embrittles metals, etc., etc.

Another alternative, one I'm interested in, though 50 years of serious research[2] has yet to result in a working large-scale prototype, is Fischer-Tropsch fuel synthesis. Effectively it creates hydrocarbon fuels using electricity. The most promising model I'm aware of sources both hydrogen and carbon from seawater, hence seawater-based Fischer-Tropsch fuel synthesis. Penciling out the studies I'm aware of, it actually could scale up to current US and foreseeable global levels of production without literally paving the world with solar panels and/or synthesis plants[3]. I.e., not patently impossible. The fact that the research hasn't proceeded further makes me question its ultimate practicality.

If, however, it is possible, then we end up with a fuel that is an exact chemical analog of existing hydrocarbon fuels, is infinitely miscible with them in the fuel processing, dispensing, and utilisation chain, and is carbon neutral.

And if you're generating hydrogen, you're already about 90% of the way to creating synthetic hydrocarbon fuels which avoid most or all of hydrogen's storage, transport, dispensing, handling, embrittlement, and energy conversion issues.

As I mentioned, I've looked into this in several posts, you might want to start with the historical overview here:




1. Technically, so are petroleum, coal, and natural gas. The source energy was supplied hundreds of millions of years ago, on average, in the form of sunlight converted to plant matter. Given this, at the rate of roughly 5 million years of ancient primary production per year of current consumption, you could make a reasonable argument that the fully realised solar energy cost (what's called "emergy") of petroleum is about 5 million units per single unit of energy delivered. At the very least, we're spending 5 million years of accumulation per year. Something you might want to reflect on.

2. Brookhaven National Lab, M.I.T., and the US Naval Research Lab. Generally serious outfits.

3. For a counterexample, see schemes for biofuels. At best, present US fuel consumption would require plausibility-stretching levels of development, if not quite literally multiples of total US landmass. I.e., quite patently and evidently batshit impossible.

sdkjfwiluf 8 hours ago 1 reply      
Hydrogen and hydrocarbon fuels are just sources of electrons. The beauty of a real electric car is the efficiency of going directly from stored electrons to power.
IanDrake 10 hours ago 3 replies      
I thought hydrogen had been debunked by now.

Tesla had a great 'well to wheel' efficiency analysis post on their blog a few years back that killed any notion of a hydrogen economy.

Boltzmann Brain wikipedia.org
25 points by mmrichter  5 hours ago   6 comments top 5
chriswarbo 1 hour ago 0 replies      
Boltzmann's fluctuations are pretty easily defeated by Feynman's argument: small fluctuations are exponentially more likely than large fluctuations, so if we make a new observation (e.g. opening a door) the fluctuation hypothesis predicts that we'll see random noise; yet we don't, we see more ordered structure. The hypothesis is refuted.

What I find more interesting is if we're a random fluctuation on a Turing machine tape. "Randomness" takes a lot of space to encode in a program, so smaller (more likely) programs lead to less random, more structured outputs. This agrees with our observations.
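The "small fluctuations are exponentially more likely" point can be made concrete with a coin-flip toy model (a stand-in illustration, not Boltzmann's actual statistical mechanics):

```python
from fractions import Fraction

# Toy model of Feynman's point: under pure chance, an "ordered" patch
# of n bits (say, all heads) has probability 2**-n, so every extra bit
# of order halves the likelihood of the fluctuation.
def p_ordered(n_bits):
    return Fraction(1, 2 ** n_bits)
```

A fluctuation just 10 bits "larger" is already 1024 times rarer, which is why the fluctuation hypothesis predicts noise, not structure, behind every newly opened door.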

RobertoG 1 hour ago 0 replies      
'Permutation City' by Greg Egan, a Science Fiction novel, has as one of its main subject a variation of this idea.

Recommended; it's a very interesting read.

vog 2 hours ago 0 replies      
From the article:

> A Boltzmann brain is a hypothesized self-aware entity which arises due to random fluctuations out of a state of chaos.

This is really strange. Years ago I wrote a short story about a similar topic, without ever having read about the Boltzmann brain:

"History of Everything"


Eerie 3 hours ago 1 reply      
You know what's funny? Our entire universe could be a Boltzmann Universe that spontaneously appeared just three minutes ago.

Happy Birthday, everyone!

drjesusphd 1 hour ago 0 replies      
After reading this, The Last Question by Asimov takes on new meaning.


Wire open-sourced github.com
451 points by arunc  1 day ago   107 comments top 26
gfosco 1 day ago 3 replies      
A link to a GitHub organization isn't great.. I'd say this is better: https://medium.com/@wireapp/you-can-now-build-your-own-wire-... but even that doesn't clearly explain what Wire is. Visit https://wire.com to find out it's an encrypted video and group chat app.
grizzles 1 day ago 1 reply      
A bold move by Wire. Open source is still a very disruptive play, and the world needs something like this. If they manage this well and triple down on developer engagement, it could work out quite nicely for them. EDIT: Thread title is slightly misleading. It looks like they did a Telegram. There is no server here.
deltaprotocol 1 day ago 6 replies      
I must say that my first impression is beyond positive.

One to one and group chats, group video and audio calls, GIF search built-in, doodles, the best implementation of photos in the message stream that I've seen, poking and playable Spotify and Soundcloud music by just sharing links? All with end-to-end encryption?

I have that "too good to be true" feeling but, still impressed. Just waiting for possible audits and more feedback from the security community.

Edit: It's also Switzerland based, already supports Win10, MacOS, Web, Android and iOS, and to complete has the cleanest design I've seen in a messaging app.

laksjd 23 hours ago 0 replies      
They offer a password reset function. How does that work? Do they hold my private key in escrow? I'd certainly hope not! Or does the password reset work by creating a new keypair? If so, does this at least generate WhatsApp style security warnings for people chatting with me?

With some digging I've found a way to verify key fingerprints so that's nice, but it's manual, not QR assisted :(

saghul 1 day ago 0 replies      
Lots of good stuff in there, thanks Wire! I just wish they had gone with something other than GPLv3 for libraries, like LGPL. Looks like they changed them in December, from MPL 2.0 to GPLv3.

At any rate, there are lots of us who can use the code with that license :-)

melle 1 day ago 1 reply      
I believe all their good intentions and I do hope they succeed. But for me it's too early to tell whether their business model will hold. If they build up a sufficiently large user base but fail to monetize it and sell the company to e.g. Microsoft or Facebook, then I doubt much of their original privacy / openness will remain.

Another thing that I wonder about: Does being Swiss-based give them a privacy advantage?

jacek 23 hours ago 0 replies      
I am a user. I switched myself and my family from Skype a few months ago and it has been great so far. Quality of video and audio is great, Android app works very well (better than web based desktop versions). And it also works in a browser, which is great for me (Linux user).
nanch 1 day ago 1 reply      
See https://wire.com for more information since the linked repos provide no context. "Crystal clear voice, video and group chats. No advertising. Your data, always encrypted."
prayerslayer 16 hours ago 0 replies      
Not sure if these are for realsies, but there are some API keys in the webapp repository:
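Leaked keys like the ones mentioned are usually caught by pattern-matching over the repository; a minimal sketch follows. The regex is a guess at "key-shaped" assignments, not what any real tool uses -- production scanners (truffleHog, git-secrets, gitleaks) add entropy checks and scan full git history:

```python
import re

# Minimal, illustrative key scanner: match common secret-ish names
# followed by a long alphanumeric token. A guess, not a real tool's rule.
KEY_PATTERN = re.compile(
    r'(?:api[_-]?key|secret|token)["\'\s:=]+([A-Za-z0-9_\-]{16,})',
    re.IGNORECASE,
)

def find_keys(text):
    """Return the candidate secrets found in a blob of text."""
    return KEY_PATTERN.findall(text)

sample = 'config = {"api_key": "AKIA1234567890EXAMPLE"}'
```

Running something like this in CI before pushing is the cheap way to avoid ending up in a comment like the one above.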



mrmondo 6 hours ago 0 replies      
Sorry if I've missed it somewhere but I'm looking for some independent, transparent reports on its security implementation. I was wondering if anyone could help me with finding this - or if perhaps they haven't been done I guess that would answer my question?
jalami 1 day ago 1 reply      
Side note, but it's kind of strange that images on their site require cookies enabled to view. I didn't dig into a reason, I just white-list the sites I want to use cookies and found it odd that there were big white spaces before doing so.
happyslobro 23 hours ago 2 replies      
I found a file that is available as either MIT or GPL. Or is it only available under a union of the terms of both licenses? An intersection? Who knows, IANAL. https://github.com/wireapp/wire-webapp/blob/0cf9bf4/aws/main...

Why do people copy the license all over the place like that?

mahyarm 1 day ago 3 replies      
Now all this needs is a few good third-party audits and verifiable builds, and it's the holy grail of encrypted communications!
_bojan 1 day ago 1 reply      
Didn't see that coming. I think Wire is struggling to get new users and this move could put them on the map.
20andup 1 day ago 1 reply      
I wonder what the business model is?
redthrow 18 hours ago 1 reply      
Why does this Android app require a phone number to sign up?

At least Hangouts lets me use the app without a phone number.

sanjeetsuhag 1 day ago 2 replies      
Can anyone explain to me why they use an UpsideDownTableViewController ?
pedalpete 1 day ago 3 replies      
I don't get how they can make statements like this "Only Wire offers fully encrypted calls, video and group chats available on all your devices". Webrtc is encrypted by default.
stemuk 1 day ago 3 replies      
I wonder how they encrypted their chat on the web client. Since the Signal protocol is kind of the gold standard right now, perhaps their solution might in the end be the better one.
07 4 hours ago 0 replies      
Hmm, seems interesting.
aleken 1 day ago 1 reply      
Otto is my new best friend. I cannot see any information about a bot API on their site though...
mei0Iesh 1 day ago 0 replies      
Thank you! Wire is the best, with multiple device support, clean mobile app, and a desktop client. It'd be nice if it were a standard open protocol so everyone could implement it, and find a way to allow federation. I'd pay to help support.
maxpert 21 hours ago 0 replies      
Good to see people using Rust in production :)
mtgx 18 hours ago 1 reply      
I've been asking for three things from Signal for the past almost two years:

1) desktop app

2) video call support

3) self-deleting messages

Signal finally (sort of) delivered a desktop app, but it still doesn't have the other two. Wire has the first two, but it's still lacking the last one. I hope one of them will have all three of these features soon.

arthurk 1 day ago 2 replies      
Is there a way to download the OSX app without the Mac App Store?
vasili111 23 hours ago 1 reply      
Where is Windows client source code?
Category Theory for the Sciences mit.edu
186 points by 0xmohit  21 hours ago   67 comments top 8
dkarapetyan 16 hours ago 3 replies      
This comes up every so often and people argue over the merits of learning category theory or not or how useful or abstract it is. The answer is it is very abstract and you shouldn't learn it expecting great dividends in your day to day work. Like with most mathematics you should learn it because it will change the way you think and solve problems. I definitely don't use calculus in my day to day work but the idea of infinitesimal quantities, continuity, and linear approximations have more than once helped me come up with an approximation to a problem that has made the solution tractable.

If you are expecting to apply something immediately to your work then combinatorics and probability theory are way more relevant and practical for day to day programming work.

yomritoyj 7 hours ago 3 replies      
Category theory has served mathematicians well, but only because they had a great mass of concrete detail which needed to be abstracted away.

An abstract framework like category theory can actually be harmful where there isn't that much concrete detail to begin with. A personal example I faced was being overwhelmed by category theory jargon when starting to learn Haskell a couple of years back. My confidence returned only when I realized that there was just one category in play with types as the objects and functions as the arrows. The jargon was unnecessary. Today Haskellers discuss so many interesting issues about the language and its implementation which do not fit into the category theoretic framework at all.
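The "one category in play" observation can be illustrated directly: objects are types, arrows are plain functions, and composition is ordinary function composition. This is a loose sketch; it ignores the partiality and non-termination that complicate the real Hask story:

```python
# Objects ~ types, arrows ~ functions, composition ~ function
# composition. A loose sketch of the "one category" reading.
def compose(g, f):
    """The categorical composite 'g after f'."""
    return lambda x: g(f(x))

identity = lambda x: x          # the identity arrow on any object
double = lambda n: n * 2        # an arrow int -> int
show = lambda n: str(n)         # an arrow int -> str

# Associativity and the identity laws hold by construction:
h1 = compose(show, compose(double, identity))
h2 = compose(compose(show, double), identity)
```

Everything Haskell beginners meet under categorical names (functor, monad) specializes to operations in this one category, which is why the jargon can feel heavier than the idea.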

Phithagoras 20 hours ago 0 replies      
harveywi 19 hours ago 1 reply      
Does anyone here have any experience using the Ologs proposed in the book for doing knowledge modeling or other things? Some anecdotes, or in lieu of that, opinions of this formulation compared to other knowledge model representations would be interesting.
cousin_it 4 hours ago 1 reply      
So. Just to clear up which math is good for a programmer to learn.

All computer science is divided into Theory A (algorithms and complexity) and Theory B (logic and programming language design). Applications of category theory are part of Theory B. If you're a programmer who wants to have real world impact, you should study tons of Theory A and completely ignore Theory B.

That sounds inflammatory, but it is unfortunately 100% true. All of Theory B combined has less impact than a single hacker using Theory A to write Git or BitTorrent.

bnj 19 hours ago 6 replies      
Just on a formatting note, it would be great to have a static generator that takes markdown input and generates this kind of html book with footnotes and references. Similar to [spec-md](http://leebyron.com/spec-md/). Anyone know of interesting projects along those lines?
CarolineW 20 hours ago 5 replies      
Particularly of note:

  1.3 What is requested from the student

  The only way to learn mathematics is by doing exercises. One does not get fit by merely looking at a treadmill or become a chef by merely reading cookbooks, and one does not learn math by watching someone else do it. There are about 300 exercises in this book. Some of them have solutions in the text; others have solutions that can only be accessed by professors teaching the class. A good student can also make up his own exercises or simply play around with the material. This book often uses databases as an entry to category theory. If one wishes to explore categorical database software, FQL (functorial query language) is a great place to start. It may also be useful in solving some of the exercises.[0]
This is not a novel, and you don't learn by just reading things. You need to engage topics like this in hand-to-hand combat, and if you're not willing to do that - give up now. Your knowledge will be superficial, and ultimately useless.

Added in edit: To reply to both (currently) comments, you don't learn how to program by just reading and not doing. You can learn about programming, but you won't actually be able to program, and your knowledge will be superficial. If that's your objective, just to know about this topic, and others in math, then fine. If you want to be able to use your knowledge in any meaningful way, my comment stands. It's not enough just to read. You have to engage, and do the work.

[0] http://category-theory.mitpress.mit.edu/chapter001.html#lev_...

paulpauper 20 hours ago 1 reply      
Category theory, along with algebraic geometry, is one of the most complicated concepts ever developed. Trying to extend it to a wide range of applications seems like overkill. Yes, technically you can use it, but it's like using an archaeology kit to dig a hole when a simple spade will do.
Show HN: Chalkbot hipolabs.com
51 points by tunavargi  17 hours ago   10 comments top 6
em_ 52 minutes ago 0 replies      
I am doing similar drawings with Sandy Noble's Polargraph (http://www.polargraph.co.uk). How do you keep the liquid chalk flowing? Common liquid chalk pens need pumping over time.

art+com created a massive chalkbot at Jelling Museum in Denmark: https://artcom.de/en/project/experience-centre-royal-jelling...

StavrosK 2 hours ago 0 replies      
Very nice! I made a similar one with LEGO Mindstorms in three dimensions:


kator 12 hours ago 0 replies      
I built a dual-axis laser pointer setup to play with my cat, but she looks at the servos too often because they're so noisy. I was wondering if I could use memory wire to control the laser pointer since it would be quieter.

Now I'm wondering if these stepper motors might be a better alternative, maybe a third one against some sort of a spring loaded harness.

Either way very cool, would be interesting to see if a third motor might have helped rather than relying on gravity and good behavior of the chalk pen.
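For a hanging-pen bot like the Polargraph mentioned upthread, the two motors only control string lengths and gravity supplies the rest; the mapping from pen position to string lengths is just two hypotenuses. The dimensions below are assumed, since the post doesn't give the chalkbot's real geometry:

```python
import math

# Hanging-pen ("v-plotter") kinematics: motors at the two top corners
# reel string in and out; gravity keeps the pen taut below them.
def string_lengths(x, y, width=1000.0):
    """String lengths for a pen at (x, y), with anchors at (0, 0) and
    (width, 0) and y measured downward, all in the same units."""
    left = math.hypot(x, y)
    right = math.hypot(width - x, y)
    return left, right
```

Driving the steppers then means stepping each spool by the change in these lengths between successive pen positions; it is also why straight lines in (x, y) come out slightly curved unless the path is subdivided.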

Qworg 10 hours ago 1 reply      
And here I thought it was the Nike Chalkbot: http://m.youtube.com/watch?v=HmW-eGCrSxs

Still cool though!

chris_st 12 hours ago 0 replies      
That's great! Thanks for detailing what went wrong, as well as what went right.

I've wanted to do a drawbot (paper and pen) for a while, and thought these pulleys[1] and belts[2] might be good.

1: https://www.adafruit.com/products/1251
2: https://www.adafruit.com/products/1184

rosalinekarr 15 hours ago 1 reply      
I wonder what this would look like if they could speed up the motors and smooth out the motion with some more taut string. Maybe they could recreate more hand-like motions.
How to write a 48-hour game in just 2 years (2013) fistfulofsquid.com
123 points by ingve  19 hours ago   40 comments top 18
Someone 17 hours ago 2 replies      
"48 hours later my iPhone was displaying a black screen with a white triangle on it. Clearly my limits were being tested."

That reminds me of http://rampantgames.com/blog/?p=7745:

"In the main engineering room, there was a whoop and cry of success.

Our company financial controller and acting HR lady, Jen, came in to see what incredible things the engineers and artists had come up with. Everyone was staring at a television set hooked up to a development box for the Sony Playstation. There, on the screen, against a single-color background, was a black triangle.

"It's a black triangle," she said in an amused but sarcastic voice. One of the engine programmers tried to explain, but she shook her head and went back to her office. I could almost hear her thoughts: "We've got ten months to deliver two games to Sony, and they are cheering over a black triangle? THAT took them nearly a month to develop!"

chipsy 13 hours ago 0 replies      
I've written many 48 hour games. I've also written games that drag on for months or longer without apparent progress.

The difference literally comes down to whether you are doing easy things or not. Having an engine or framework does help make a variety of things easy, but it does nothing for the one or two features that aren't. Eventually you hit a wall where it takes forever, and that's your next month. You get over the wall and then a flood of other new features come in almost instantly. Also in the same ballpark are features that you have coded before and are familiar with, vs. ones you aren't. You can get a lot done "from scratch" by spamming preexisting knowledge at the problem, but it still takes time and it isn't exactly easy either.

Last of all, at first clone-and-modify is enough to feel interesting. So you go very quickly, because you care little about the result. But after a few dozen times doing that, you're done, and you want to expand the parts you care about. That creates more barriers to get over, more months where progress is slow because your ambition is big enough to no longer follow the easy path. More months where problems are on the content development side, not the runtime. That part is always difficult. Scope is deceptive.

joeyspacerocks 14 hours ago 2 replies      
Hi - author of the post here. Just a quick note - don't believe a word of this. I wrote it in 2013 and am now 3 years into the dev of the next game.

It's lies, all lies :) actually it's probably solid advice - the hard bit turns out to be following it.

The missing piece of advice turned out to be - "stay disciplined" ...

shubhamjain 2 hours ago 1 reply      
I made my 'stupid' game - "Penguin Walk"[1] - a year back over a period of three months. Like the author, there were periods of "I don't feel like working on it", but in the end I pushed myself to finish it. A huge chunk of my time was wasted resolving timing errors, since I built it from scratch; it could have gone better had I used a game engine.

[1]: http://shubhamjain.github.io/penguin-walk/

veddox 18 hours ago 5 replies      
Thank you for sharing this!

> Don't build an engine instead of a game

Only too true... That's a trap I always fall into. I guess programmers are so hard-wired to abstract the problems that need solving that they end up doing a lot of abstracting and very little solving. (Compulsory xkcd: https://www.xkcd.com/974/)

nathan_f77 1 hour ago 0 replies      
I just made a little iOS game too! Actually this time I managed to stay on track and get everything done in about 1 month. I honestly don't know how I did it, but it's really fun to finish a project. http://boopsboopsswoops.com/
lubesGordi 17 hours ago 0 replies      
It can take a long time just to get the opengl type programming understood. I've been studying this: http://lazyfoo.net/tutorials/SDL/index.php which I think does a terrific job explaining many of the important concepts while at the same time giving you useful outline code that you can use to quickly prototype your game. Best of all you supposedly can port this to many devices (it's an SDL tutorial).
Namrog84 17 hours ago 1 reply      
I recently released my first real attempt at a small game beyond asteroids-clone scope, and what I felt I could do in 2 weeks took about a full year, with several month-long breaks and many things just being tremendously more time-consuming than projected. I fully appreciate and understand this. Part-time game making is difficult, especially for new game devs.
kpwagner 16 hours ago 0 replies      
"By the time I'd downloaded the new version and fixed various issues it identified with the project format I'd run out of time and energy to continue."

Haha. About a year ago I went through 6 hours of updates to get a python script to create mouse and keyboard inputs (moving the mouse and typing to interact with GUI apps). After that I called it a day and have barely touched the project since.

asciimo 17 hours ago 1 reply      
Here's a short video of QB1-0's gameplay: https://www.youtube.com/watch?v=JVVnw2b-s1U
Tempest1981 16 hours ago 0 replies      
> Always have the next task ready and divide the work up into little chunks. The next time you find yourself with a bit of spare time you're all set and ready to roll.

This is a great technique, but takes some discipline. Reminds me of GTD.

engine_jim 14 hours ago 0 replies      
Your experience sounds similar to mine. I used to write games from scratch in C/C++ and I would quit the projects after I wrote most of the engine code (which would take a few months), and had something presentable.

Back then there weren't engines like Unity and Unreal that solve most of the technical problems like rendering, and physics/collision detection. Even with these tools though, I still find it difficult to finish a project quickly because I spend so much time writing infrastructure code. For that reason I never do hackathons.

kiba 16 hours ago 0 replies      
I got a similar story for a mod I am working on in Factorio. Unfortunately, I have yet to see anybody using it in any real capacity (including myself, ironically).

The reason is simple, I believe? Nobody knows how to use it beyond myself.

Even if that isn't the case, I still need to work out a tutorial anyway.

cocktailpeanuts 7 hours ago 0 replies      
"Know when to stop" <== This really cracked me up because it's not only true but also put in such an eloquent expression.
ww520 8 hours ago 0 replies      
Starting a project or prototyping a project is easy. Finishing a project is hard. Finishing a game is especially hard.
Kiro 11 hours ago 0 replies      
> Despite your experience you really aren't going to build an MMORPG. You really aren't. You're never going to finish it.

I did and it was my first game. The sentiment definitely holds true for most though.

augbot 11 hours ago 0 replies      
Really fun game!
mungoid 14 hours ago 0 replies      
I could swear you were talking about me. Good post!
Introducing Vulkan-Hpp Open-Source Vulkan C++ API github.com
123 points by gulpahum  20 hours ago   63 comments top 9
overcast 20 hours ago 1 reply      
Enjoying Vulkan so far in the recently patched Doom, which netted me a minimum 10% increase in frames per second. I really hope this gets adopted / patched into current games, and those on the horizon. Seems like a big win for cross platform development.
gulpahum 6 hours ago 1 reply      
The new Vulkan C++ API from Khronos is based on vkcpp. Here are examples for vkcpp, to give an idea of what code with the C++ API looks like:


Here's a basic tutorial with comments for the Vulkan C API. Vulkan is a very low level API, so there's a lot of code. It should be straightforward to port the C tutorial to use the C++ API.


Hello71 13 hours ago 2 replies      
So hold on. Assuming that I understand C++ linking right, you're saying that I should include this 17000-line file in every single cpp unit in my application that uses Vulkan APIs? And people wonder why C++ programs compile so slowly.
izym 15 hours ago 0 replies      
This is what was previously the vkcpp project from Nvidia, which seems to have been transferred to the Khronos group.
BinaryIdiot 8 hours ago 1 reply      
Whoa now that is cool! Are there any games out there that currently use this C++ API of Vulkan? Very curious to know if there are any major issues with using it versus the direct C API.

I'm not very knowledgeable regarding Vulkan, so hopefully that isn't a stupid question, but I want to brush up on my C++ skills and play with this!

Also how similar is Vulkan to SDL? I used to use SDL quite a bit back in the day and it was awesome but I'm assuming Vulkan is far more comprehensive?

kayamon 18 hours ago 2 replies      
A lot of their "improvements" could be done just by using C99 designated initializers.
criddell 19 hours ago 1 reply      
Are there any scene graph libraries that use Vulkan?
riotdash 19 hours ago 3 replies      
As a game developer I'm pretty excited about this. There is really no reason to use a DX12 backend in future game development anymore, because of the ease of development, performance and multiplatform features of Vulkan.
lubesGordi 17 hours ago 6 replies      
Vulkan is a cross-vendor GPU API from the Khronos Group. https://www.khronos.org/vulkan/
Why Im Suing the US Government bunniestudios.com
1820 points by ivank  2 days ago   300 comments top 36
DoubleGlazing 2 days ago 7 replies      

My wife is a speech therapist and uses a system that is designed to help people who have had strokes regain their voice.

It comprises a piece of software that comes with a "specially calibrated USB microphone". The microphone is actually a Samson laptop USB mic that had the voice improvement system's logo stuck on it.

The system came with lots of legal warnings about not copying, not telling unqualified people about how it worked and not to use an unapproved microphone. The DMCA was specifically mentioned.

One day the mic failed (the program requires patients to shout aggressively at the mic), so my wife went off looking for a replacement. We had a few USB mics that we tried, and the application refused to acknowledge their existence even though they showed up in Windows. It became obvious that the software was checking the USB device ID. My wife went to the company that ran the system to get a replacement, but they said she had to buy a new copy of the software as well - total cost $659. So we took a chance and ordered a new Samson USB mic from Amazon for 30.00, but when it arrived it didn't work. It was the same model, but was a few generations ahead and therefore had a different USB device ID. My wife has some colleagues with the same package, so I tested their mics and they had different USB device IDs. It became obvious that when Samson released a revision of the mic, the company offering the system simply recompiled the code with the new device ID baked in and then re-branded the mic.

So, not wanting to shell out $659 for a whole new package, I took the old and new mics apart, desoldered the cartridges from both, and put the new one in the body of the failed mic. It worked! Now technically this would be a violation of 1201 in the sense that the individual copy of the software they sold you was tied to the specific mic they sold you at the same time - they said so in the EULA. But let's be honest, that's just nonsense. They were simply trying to sell more stuff - a tactic that seems fairly common in various fields of professional therapy.

This is the sort of problem caused by 1201. If we lived in the US we would have been in breach of the DMCA even though we copied nothing.

Also, the software is as ugly as sin.
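A guess at the kind of lock-in check described above: the software compares the attached mic's USB vendor/product ID pair against a single hard-coded value, so a functionally identical hardware revision fails. All IDs below are invented for illustration:

```python
# Hypothetical sketch of the therapy software's device check: accept
# only one hard-coded USB vendor:product ID pair. IDs are made up.
APPROVED = {(0x17A0, 0x0101)}  # the shipped "calibrated" mic revision

def mic_accepted(vendor_id, product_id):
    return (vendor_id, product_id) in APPROVED

old_mic = (0x17A0, 0x0101)  # original revision: accepted
new_mic = (0x17A0, 0x0102)  # later, otherwise identical revision
```

This is exactly why a replacement of the same model, a few hardware generations on, was rejected: the check pins a product ID, not any acoustic property of the "calibration".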

hlandau 2 days ago 2 replies      
This post about the damage inflicted by 1201 reminded me of another 1201: Halon 1201, banned because it depletes the ozone layer. A serendipitous coincidence, with this post talking about 1201 like an ecological threat.

More seriously, the GPLv3 contains an interesting provision. Search for "Anti-Circumvention" in this to find the section: https://www.gnu.org/licenses/gpl-3.0-standalone.html

The second paragraph is probably enforceable, but I'd be interested to hear from someone suitably informed whether the first paragraph has any basis. How far can it be taken?

For example, one of the most insidious things about the Blu-ray format is that unlike DVD and HD-DVD, commercially pressed video Blu-rays are obliged to use AACS. Theoretically non-AACS discs could be pressed and work, but the replication plants aren't _allowed_ to print non-AACS video Blu-rays. This has caused some consternation where people want to distribute Creative Commons/etc. video on optical media, more than can fit on a DVD. I think I recall Archive Team talking about just having to resort to putting video files on a data Blu-ray instead.

If someone made a film, put "Neither this work nor any derived work can constitute an effective technological measure for the purposes of the WIPO copyright treaty or any corresponding legislation" in the credits, and then someone else got AACS'd Blu-rays made of it, would 1201 thereby not prohibit breaking AACS specifically in the context of that Blu-ray? It seems rather dubious.

benmarks 2 days ago 0 replies      
onetwotree 2 days ago 3 replies      
Good luck!

What's kind of cool about this issue is that it attracts support from citizens of all political stripes - whether you're a farmer who just wants to be able to fix his own damn tractor, or a hacker who wants to futz with proprietary hardware, the law is patently bogus.

Unfortunately, farmers and hackers have far less political influence than corporations. Hopefully by pursuing this through the courts and with adequate resources from the EFF some progress can be made that couldn't be in congress.

rayiner 2 days ago 9 replies      
Circumvention by itself definitely shouldn't be illegal, and it's probably unconstitutional to make building and researching circumvention mechanisms illegal. But I don't buy Step 2.

> EFF is representing plaintiff Andrew bunnie Huang, a prominent computer scientist and inventor, and his company Alphamax LLC, where he is developing devices for editing digital video streams. Those products would enable people to make innovative uses of their paid video content, such as captioning a presidential debate with a running Twitter comment field or enabling remixes of high-definition video. But using or offering this technology could run afoul of Section 1201.

It definitely should be legal to build those products. Maybe it should be legal to distribute that captioned video as fair use. But why should Twitter profit from a user captioning a video CNN created?

That's the part I have trouble with here. Fair use is fine and good, but there is a large universe of very profitable companies that don't make content of their own, but profit from other peoples' content. Of course they have a huge interest in weakening copyright protections under the guise of promoting fair use.

unabst 2 days ago 2 replies      
What we need is the legal right to fork any IP. An open licensing model where no one needs permission. They just need to maybe pay an IP tax that trickles up to the previous contributors that helped produce what was forked.

IP is completely flawed because it grants a monopoly on the fruits of specific knowledge or a work as if they are static end products, whereas in reality anything that is not evolving is dying. So the law restricts progress to the owners of the IP even when we could all contribute. And when there is incompetence or negligence by the owners, we have a situation where something good is ruined or withheld, with anyone fixing it being illegal.

Removing IP is impossible because it's about profit, which is also a right. What we need is a new revenue system based on new principles of an expectation of progress and open contribution. Open source software and hardware is this, but just without any standard profit model backed by law.

ethanpil 2 days ago 0 replies      
If something isn't done about this very soon, people will never remember or know what used to be. Most (many?) of us here have used VCRs, tape recorders and CD burners, etc, and understand what he is talking about when we remember the days when we had freedom to own information.

Today's kids have been well trained by Apple, Google and Netflix and hardly even understand what we are talking about.

"Oh, you don't have an iPhone anymore? Just buy it on Google Play and you will have it again on your Galaxy." is a quote I have heard more than once...

dikaiosune 2 days ago 2 replies      
If you're in this thread to support this EFF-backed action, I would strongly consider donating to a cause you support:


DanBlake 2 days ago 3 replies      
Unfortunately I believe that even if the suit was successful, we would just see more purchases become 'perpetual licenses', skirting the updated law. IIRC, Tesla was very heavily against letting anyone tinker and went to some extremes to stop it. It wouldn't surprise me in the least to see them make buyers sign a EULA in the future when you go to 'purchase' a vehicle.
forgotpwtomain 2 days ago 3 replies      
I am curious why, if they actually believe they have a good chance of success, this is only being filed now rather than in prior years? Has something changed?
mrmondo 2 days ago 1 reply      
I fully support your cause.

I'm not an American and do not live in America but the problems with American (copyright) laws unfortunately affect the world on a global scale. I sincerely wish you all the best in your efforts and hope that other organisations as well as the (fantastic) EFF back you.

I stand behind you.

filoeleven 2 days ago 0 replies      
A quick summary for those who don't want to click through without knowing what the lawsuit challenges:

Section 1201 contains the anti-circumvention and anti-trafficking provisions. These infringe upon fair use activities like format conversion, repairs, and security research.

dang 2 days ago 0 replies      
A related article by Matthew Green is at https://news.ycombinator.com/item?id=12137437, and by the EFF at https://news.ycombinator.com/item?id=12136682.
thinkMOAR 2 days ago 1 reply      
If only they were the bully on the school playground, perhaps you could fight him. But they are the playground. I wish you the best of luck.
lifeisstillgood 2 days ago 0 replies      
The UK government is trying to push for OSS as the default for all government software. A default for all "societally beneficial" software is a better goal, and one highlighted here.

Now my attempts so far are stymied by this weird half world. Most government contracts basically want either bums on seats contractors or to fundamentally hire "someone who has done it before" (effectively the same as wanting to buy off the shelf)

So there is almost no way to seed fund the initial OSS development.

Down thread people talk about a fund for starting OSS projects to provide things like this. Plover is an example of people trying it on their own - but a funded system that basically follows current gov work seems better.

SilasX 2 days ago 1 reply      
Can someone do a tl;dr? This is upvoted very highly but it's assuming a ton of context I don't have. All I get is that someone wants to be able to tinker, but today that necessitates breaking some legally-enforced protections on the product.

That's a valid point but I don't see how it's gotten to 1000 points, so I think I'm missing something. What's the lawsuit? What's the egregious use case?

markokrajnc 2 days ago 2 replies      
"Our children deserve better." If you take children - they indeed mix and remix without worrying about any (copy)rights...
reddytowns 2 days ago 0 replies      
You know, no one asked you tech people to get involved in law making. Nowadays, a law maker can't seem to do anything at all without some techie crying foul. Their argument is always some nonsensical technobabble, which the courts can't really understand anyway, often giving in to their demands just to get them to go away.

And it's such a shame, too, since those laws were bought and paid for by lobbyists, and what does it say about the rest of the country if one can't expect to get what one pays for when lobbying at the highest level of government?

tomc1985 2 days ago 0 replies      
Doesn't the US dismiss most lawsuits filed against it out-of-hand? Wasn't that why that class-action on behalf of the Japanese concentration camp survivors was such a landmark case?
shmerl 2 days ago 0 replies      
Great. DMCA-1201 was always unconstitutional and was in practice used to stifle free speech. Good to see EFF actually bringing it to legal fight. It should be repealed completely.
ankurdhama 2 days ago 0 replies      
The problem is this new business model where they don't just sell you stuff; they also sell you "specific rights" along with it - the usual things, like you cannot do this or that with what you bought from us. The sole purpose is to keep earning money even after the one-time deal of buying the stuff.
LELISOSKA 2 days ago 0 replies      
This entire cause is a sham, beyond belief, a cause that seeks to degrade the value of creative thought and intellectual property.

Before we get into socioeconomic barrier discussions: I am a former disabled homeless person who is now the founder of one of the most powerful environmental activism groups in the country. I started out with nothing and worked myself to where I am, using original and creative thought, and at no time have I ever needed anyone's intellectual property to build myself to where I am.

The Electronic Frontier Foundation, that supports this complete bullshit erosion of the rights of content creators everywhere, does nothing in this world but fight for causes that continually reduce the market value of original ideas.

They claim to fight for things like free speech but what they really fight for is the rights of anonymous hate groups to steal your photos and write nasty messages on them. They fight for the rights of the meek to inherit the Earth so they can then destroy it with their abject failures.

Look to the recent lawsuit Google v Oracle, where Oracle sued Google over the use of their software in Android. Google avoided billions in liability and it was all thanks to the work of the EFF, who suck off the teat of Silicon Valley and protect their billionaire buddies from financial liability, and then they support little guys like this so they can continue their 1% supporting ruse.

I look forward to watching this mad grab at free intellectual property get slapped down by Washington DC. This is not about fighting the government; this guy is a puppet being used by the powers that be in Silicon Valley in order to allow companies like Google to continue to rob, loot, and pillage other people's intellectual property without financial liability.

BenedictS 2 days ago 0 replies      
I've made an account just to wish you good luck! You're a great man for doing this and I'm glad EFF is on board.
maerF0x0 2 days ago 0 replies      
IME many US people do not resonate with the creativity arguments, but do with the freedoms. The land of the free lately doesn't feel like it, and I think many US people are feeling it too. It may help to phrase your arguments in the wording that the constitution is meant to protect -- in terms of freedom.
chejazi 2 days ago 0 replies      
This reminds me of a new patent Apple filed to disable video recording on iphones. Would winning this suit prevent that from being enforceable?
amelius 2 days ago 1 reply      
I wonder how much he budgeted for this series of lawsuits.
hackaflocka 2 days ago 0 replies      
DJ Drama, the mixtape guy, was raided under the same law. It's an interesting story, google "dj drama raid"
spacemanmatt 2 days ago 0 replies      
The whole DMCA is a steaming pile, but I guess I'm ok with piecewise dismantling.
wonkyp2 2 days ago 0 replies      
I cackled at the former, homeless vegan (or thereabouts) who started a shitstorm in the comments.
known 2 days ago 0 replies      
I'll support;
blastrat 2 days ago 2 replies      
Yes I agree, and also - what? Why should PP's question be downvoted to hell? He's entitled to defend the other side here.

Not saying you did it, but I had to comment someplace.

paublyrne 2 days ago 3 replies      
Some people just, you know, read the article.
magice 2 days ago 8 replies      
I do appreciate the effort to protect everyone's constitutional right. I wish best of luck to the pursuit.

However, I feel like there is something very very wrong about method and intention of this type of actions/complains.

One thing always bugs me about Americans: despite the liberties that they enjoy, despite the very real capacity to impact change in their government and laws, they all hate "the Government." Who is "the Government"? Wait, ain't them the very candidates that you the people vote into offices?

Like this idea of "suing the US government." Who are you suing? The executive branch? Why are you suing them? This is over a law. It's a piece of legislation. The executive branch merely, you know, executes the laws. Why not sue Congress? Oh wait, why sue Congress when you can simply vote them out of office? Oh wait, why "stop enforcing" the laws when you can, you know, CHANGE the laws?

This kinda reminds me of the libertarians' ideas of obstruction of legislation so that "the government does not spend more." If not spending is the right thing to do, why not educate people about that? Even if one believes that 47% of the population is "takers," 53% is still a majority. So teach, advocate, change minds. But no, they prefer to obstruct their country, risk centuries of their national reputation, put their fellow citizens to starvation. You know, if this happens in schoolyards, we probably call it "bullying." But if a bunch of libertarians do it, it's "principles."

Obviously, I agree with the plaintiff here. However, the method is still wrong. And different from above, there are very few "takers" here. Mostly, it's faceless businesses that (let's be frank here) few people like. So why not take the high road? Why not educate your fellow citizens on the danger of the laws? Why not change minds? Why not raise money for candidates who will change the laws appropriately?

In short: why not be a citizen rather than a rebel? Why not change the system for the better rather than obstruct it? Why not make your society/country a better place rather than simply fight it?

ryanswapp 2 days ago 13 replies      
I studied section 1201 thoroughly during law school and I think this post doesn't give a fair characterization of it. The reason this statute exists is because companies were unable to devise protection for copyrighted works that hackers were not able to immediately circumvent. As a result, the government stepped in and created 1201 to make it illegal for someone to circumvent some form of access control that a company used to protect their copyrighted works. The purpose of the statute isn't to destroy <insert Internet activist claimed right> but is to make it much less expensive for a company to protect its products. I don't see anything wrong with that.
olympus 2 days ago 2 replies      
I think this is an important topic that needs to be addressed, but suing the government is doomed to fail. The federal government has sovereign immunity, and you can't sue them unless they decide that you can. They usually decide that you can't. Most laws aren't changed in the court unless someone is criminally prosecuted. Then your appeal case can move through the higher levels of the court until it reaches a level that the law can be struck down completely, or what usually happens is a legal precedent is set regarding a specific portion of the law.

So unless Bunnie has been prosecuted for breaking the DMCA, this is likely going to be an ineffective move.

If you want to change a law without breaking it first, the right way to go about it is petitioning Congress, the lawmaking part of the government.

6stringmerc 2 days ago 3 replies      
Let's take a quick look at the understanding of Copyright law that this litigant seems to possess:

>Before Section 1201, the ownership of ideas was tempered by constitutional protections. Under this law, we had the right to tinker with gadgets that we bought, we had the right to record TV shows on our VCRs, and we had the right to remix songs.

Wait, before the DMCA "we" had the right to remix songs? Okay, so this case is going nowhere, because the person filing really doesn't quite understand the mechanics of basic Copyright. Just kind of throwing out the concept of "remixes" does a disservice to the real nuances of how the rights/permissions/compensation system works, has been tested in court, etc.

The subject of ownership and repair is extremely complex and this lawsuit is frivolous when the matter is being actively tested by John Deere and various farmers. Maybe this person could assist in funding that challenge to 1201. There are some glaring flaws in this whole approach, from what I understand about Copyright law and the DMCA.

Also, I don't know why the EFF continues to push erroneous information regarding how Copyright, the DMCA, and Fair Use actually work:

>This ban applies even where people want to make noninfringing fair uses of the materials they are accessing.

Fair Use always trumps the DMCA; the nature of Fair Use, however, is subject to four factor tests, if an IP owner should feel compelled to assert the Fair Use was not in the spirit and letter of the law. Sometimes it seems like the EFF and TechDirt try to claim things that aren't true just to make a point. It's something that bothers me routinely in this subject in particular.

Failsafe failure handling with retries, circuit breakers and fallbacks github.com
101 points by jodah  19 hours ago   40 comments top 8
SwellJoe 12 hours ago 1 reply      
This title would be 100% better with "for Java" on the end.
nitrogen 16 hours ago 1 reply      
Very cool. Consistent and clear retry, backoff, and failure behaviors are an important part of designing robust systems, so it's disappointing how uncommon they are. If I were starting a new Java project today I would almost certainly want to use this library instead of the various threads and timers I had to hack together years ago.
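For readers outside the Java world, the core retry-with-backoff behavior that a library like this packages up is small. A rough sketch in Python - this is the general pattern, not Failsafe's actual API:

```python
import random
import time

def call_with_retries(fn, attempts=5, base_delay=0.1, max_delay=2.0):
    """Call fn, retrying on exception with capped exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last failure
            # cap the exponential growth, then apply "full jitter" so many
            # clients retrying at once don't synchronize into a thundering herd
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The value of a library over this hand-rolled version is exactly what the parent says: consistent, tested policies (jitter strategies, abortable retries, listeners) instead of slightly different ad-hoc loops scattered through a codebase.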
ckugblenu 18 hours ago 6 replies      
Quite interesting. It shows potential to be used in numerous use cases. Anyone know of similar projects in other languages like Python and Javascript?
cpitman 9 hours ago 1 reply      
How is this distinct from Hystrix (https://github.com/Netflix/Hystrix)? Why should I use one over the other?
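(For context on what both libraries provide: a circuit breaker stops calling a failing dependency after repeated errors, then probes it again after a cooldown. A minimal sketch of the idea - in Python, and not the actual API of either Failsafe or Hystrix:)

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: open after N consecutive failures,
    short-circuit calls while open, allow a trial call after a timeout."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open")  # fail fast, no real call
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The differences between the real libraries are mostly in scope: Hystrix bundles thread-pool isolation, metrics, and dashboards, while Failsafe is a lighter policy library.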
mandeepj 6 hours ago 0 replies      
Please find some of these patterns for .net\azure\c# stack here - https://msdn.microsoft.com/en-us/library/dn568099.aspx
dredmorbius 16 hours ago 3 replies      
A note on the name: "fail-safe" in engineering doesn't mean that a system cannot fail, but rather, that when it does, it does so in the safest manner possible.

The term originated with (or is strongly associated with) the Westinghouse railroad brake system. These are the pressurised air brakes on trains, in which air pressure holds the brake shoes open against spring pressure. Should integrity of the brakeline be lost, the brakes will fail in the activated position, slowing and stopping the train (or keeping a stopped train stopped).


Fail-safe designs and practices can lead to some counterintuitive concepts. Aircraft landing on carrier decks, in which they are arrested by cables, apply full engine power and afterburner on landing. The idea is that should the arresting cable or hook fail, the aircraft can safely take off again.


Upshot: "fail safe" doesn't mean "test all your failure conditions exhaustively". It may well mean to abort on any failure mode (see djb's software for examples). The most important criterion is that whatever the failure mode be, it be as safe as possible, and almost always, based on a very simple and robust design, mechanism, logic, or system.

From the description of this project, it strikes me that it may well be failing (unsafely?) to implement these concepts. Charles Perrow, scholar of accidents and risks, notes that it's often safety and monitoring systems themselves which play a key role in accidents and failures.

fdsaaf 16 hours ago 0 replies      
Beware of runaway retries: https://blogs.msdn.microsoft.com/oldnewthing/20051107-20/?p=...

Personally, I'd rather systems fail quickly, with retries only at the highest (application) and lowest (TCP) levels.

ap22213 14 hours ago 0 replies      
It seems like a well-thought, fluent interface to what lots of Java developers (especially Java 8 ones) inevitably have to write themselves.
How banks are refusing to shoulder responsibility for fraud telegraph.co.uk
100 points by walterbell  22 hours ago   60 comments top 6
slv77 10 hours ago 5 replies      
Today Gmail offers more account security than a typical bank. Why can I get two-factor authentication, device recognition and alerts on a free email account but not on a bank account?

From a risk management perspective it's never a good idea to separate liability from control. If the banks don't provide adequate security controls to their customers why should their customers be liable?

Even though the controls that a customer has are less than what Gmail provides, the banks continue to push the illusion that the customer is actually in control. Even the vocabulary they use implies the customer was always in control.

For example you were a victim of identity theft. How crazy is that? How can somebody steal my identity? Oh, I woke up this morning and I wasn't me!!!

Nope.. Checked my identity and I'm still me. Why did you allow somebody to steal all my money?
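The two-factor mechanism Gmail offers isn't exotic, either: TOTP (RFC 6238) is just HOTP (RFC 4226) keyed by the current 30-second time step. A sketch using only Python's standard library:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F          # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238: HOTP where the counter is the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)
```

That a free webmail account ships this and a bank doesn't is the commenter's point: the barrier is liability allocation, not technology.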

JumpCrisscross 1 hour ago 0 replies      
Security and convenience exist on a spectrum.

For my ordinary checking account, I opt for convenience. I don't want transactions randomly declined and I don't want to have to wait for banking hours to authorise activity. To compensate, I limit the amount I keep in the account.

For certain other accounts, I opt for more security. Cheques are blocked; foreign transactions are, by default, blocked; online banking must be two-factor authenticated every time; transfers must be authorised with a phone call below certain amounts and in person at a branch, with ID and a passphrase verified, above certain amounts; et cetera. These are flags one can have enabled on most bank accounts. They're just debilitatingly irritating for ordinary use.

If you make banks responsible for user-authorised fraud, e.g. a customer wiring money to a scammer, you're also asking them to nanny you. Freedom and protection from your own stupidity exist on a spectrum.

siliconc0w 16 hours ago 1 reply      
I suspect it's a game theory situation - until banks are unilaterally made responsible at once (say by a new law) - none want to be the ones to make their workflows more complex and invest in better ways to remotely authenticate an identity.
compil3r 5 hours ago 1 reply      
None of this will change until bankers start getting prosecuted.
nxzero 19 hours ago 3 replies      
Aside from the reference to sharing a PIN, I missed "How banks are refusing to shoulder responsibility for fraud".

What am I missing?

More importantly, the customer referenced in the article basically wired all the funds in her account to a scammer then asked the bank for it back. Sorry, but that is the customer's fault, not the bank's fault.

known 6 hours ago 0 replies      
Banks/Politicians have privilege; They will not be prosecuted; They can commit crimes in the name of serving the country; http://cnbc.com/id/43471561
Ruining the Magic of Magento's Encryption Library openwall.com
126 points by based2  21 hours ago   39 comments top 7
qwertyuiop924 16 hours ago 1 reply      
Look, I don't care if you use PHP. But if you aren't an expert, and your code hasn't been audited by experts, don't write your own crypto. Use libsodium, or libressl's libcrypto (if you think openssl is a good bet, you really need to see the talk that discusses the reasons for that fork), or libgcrypt. But for crying out loud, use a library that is actively used and vetted by people who KNOW WHAT THEY'RE TALKING ABOUT. And do your best to know what you're talking about, too.
marcrosoft 18 hours ago 0 replies      
Almost the entire Magento code base qualifies for an example of code horror.
spilk 11 hours ago 3 replies      
Aside from the obvious deficiencies, who the heck writes code like "if ($false === initVector)"? Is this a side effect of people who speak other languages with different grammatical structures? Very odd to read through code written "backwards" like that.
api 19 hours ago 3 replies      
I'm not going to say don't write crypto. Instead I will give more useful advice: if you do write crypto, make it boring. Really boring. Do not invent anything. Do not be creative. Use an established modern construction with modern ciphers and use them in only one way. That one way should be the way cryptographers recommend with no deviations.
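One concrete example of the non-obvious details that vetted libraries get right and creative hand-rolled code gets wrong: comparing a MAC with an ordinary `==` leaks timing. A sketch with Python's stdlib (illustrative of the single detail, not a complete encryption scheme):

```python
import hashlib
import hmac

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check an HMAC-SHA256 tag in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest runs in time independent of where the inputs first
    # differ; a naive == would let an attacker learn the matching prefix
    # length and forge a tag byte by byte.
    return hmac.compare_digest(expected, tag)
```

The "boring" discipline is to use constructions where such decisions are already made for you, exactly one way.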
davidgerard 3 hours ago 0 replies      
Magento is incompetent on much simpler levels than this.

1. Someone thought chmod 777 was a good idea, ever, under any circumstances. Not only is this standard practice in Magento installs (it's a how-to step in many books on Magento), it's all through the actual codebase.

The below is from the Magento Enterprise tarball, downloaded from the company (and I double-checked this after someone questioned this last time I brought this up):

 $ grep -r chmod . | grep 777
 ./downloader/lib/Mage/Backup/Filesystem.php: chmod($backupsDir, 0777);
 ./app/code/core/Mage/Compiler/Model/Process.php: @chmod($dir, 0777);
 ./app/code/core/Mage/Install/Model/Installer/Console.php: @chmod('var/cache', 0777);
 ./app/code/core/Mage/Install/Model/Installer/Console.php: @chmod('var/session', 0777);
 ./app/code/core/Mage/Install/Model/Installer/Config.php: chmod($this->_localConfigFile, 0777);
 ./app/code/core/Mage/Catalog/Model/Product/Attribute/Backend/Media.php: $ioAdapter->chmod($this->_getConfig()->getTmpMediaPath($fileName), 0777);
 ./app/Mage.php: chmod($logDir, 0777);
 ./app/Mage.php: chmod($logFile, 0777);
 ./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, '0777');
 ./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, 0777);
 ./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, 0777);
 ./lib/Zend/Cloud/StorageService/Adapter/FileSystem.php: chmod($path, 0777);
 ./lib/Varien/Autoload.php: @chmod($this->_collectPath, 0777);
 ./lib/Varien/File/Uploader.php: chmod($destinationFile, 0777);
 ./lib/Mage/Backup/Filesystem.php: chmod($backupsDir, 0777);
 ./errors/processor.php: @chmod($this->_reportFile, 0777);
2. The company thinks there's nothing wrong with storing money as floats: https://github.com/magento/magento2/issues/555
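The float problem is easy to demonstrate; a quick Python illustration of why money belongs in fixed-point decimal or integer cents:

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly,
# so cent-level arithmetic drifts:
assert 0.1 + 0.2 != 0.3

# Fixed-point decimal (or storing integer cents) stays exact:
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
assert 10 + 20 == 30  # amounts kept as integer cents
```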

The way we eventually dealt with hosting Magento (which we had strongly advised against) was a concrete sarcophagus and a thirty-kilometre exclusion zone:

* a cron line specifically to remove o-w permissions from all files in the webroot every minute (which is very inelegant, but the alternative is maintaining our own patches to core).

* Files not owned www-data, except where Magento must be able to write to them.

* deploy all webroot files as a user the webserver can't write.

* cron.sh (Magento's internal cron) runs as root out of the box. We ran it as www-data.

* AppArmor to keep Magento from ever, ever being able to pull shit. This caught Magento's more antisocial tendencies on more than one occasion.

* Admin login: use a path other than "/admin" to foil quite a lot of attack bots at the very simplest level.
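The first mitigation in the list above - stripping world-writable permissions from the webroot every minute - is a few lines in any language. A sketch in Python (the actual cron job was presumably a shell one-liner; this shows the equivalent logic):

```python
import os
import stat

def strip_world_writable(root: str) -> None:
    """Remove the o+w bit from every file and directory under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:
                os.chmod(path, mode & ~stat.S_IWOTH)
```

As the author notes, re-fixing permissions on a timer is inelegant - it exists only because the alternative was carrying local patches against a core that keeps re-applying 0777.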

We have outsourced our remaining Magento, thankfully, and I don't personally have to maintain the above any more. (You know you've been administering Magento a bit long when you can hum along to bits of "Metal Machine Music" accurately.)

The use case for Magento is (apparently deliberately) confused. It's an unholy melange of a CMS and a shopping basket. There is no good out-of-the-box experience; in practice it's a job creation scheme for consultants.

Even crap-tier "well technically I can tell my boss's boss we have paid support" support, with a four-day response time for them to ask you a simple question you already put the answer to in the original ticket, is swingeingly expensive. I can't say what we're paying for this standard of quality, but I can say that it's public knowledge that Magento is at least $13k/yr: http://web.archive.org/web/20120215011525/http://www.magento...

The problem Magento seems to solve is when the business wants a quick site without developer involvement. After a few other abortive platforms (Plone, Drupal - which are both fine for what they are, in ways Magento just isn't, but didn't end up matching our needs), our eventual solution to this was Wordpress, which we have outsourced so I don't have to think about that either. Outsourced Wordpress with securing it being the host's problem is totally the right answer.

I don't have a good answer on the shopping basket, but Magento was bad enough at that too that we went back to our in-house homerolled system.

I understand some work has gone into Magento 2.0 to make it less mind-bogglingly horrible.

Theodores 9 hours ago 0 replies      
> Magento, one of the largest open source e-commerce platforms, ships a broken cryptography library that clueless developers are probably using to encrypt your credit card information for their client's customers.

Nope. There is a built-in credit card method that would utilise the Magento crypt(), but you cannot actually hook this up to a bank and get any actual money. Even the lamest of the lamest Magento developers will use an off-the-shelf payment gateway that uses things like iframes, so that even a non-HTTPS Magento site will not have any credit card information pass through Magento; instead you get payment references.

In theory you could break the crypto and get into Magento admin, to then export out customer email and address details. You could probably refund all customers orders but not get fresh money out of them.

I appreciate the code may be 2008 vintage but there is no new vulnerability here that gives any means to access any Magento credit card data in a meaningful way, e.g. a lucrative way.

kdbuck 18 hours ago 6 replies      
I think it's great that stuff like this is brought to the surface, but it also troubles me that the author makes no mention of submitting a PR to improve the _open source_ code base... They seem more content to discuss boycotting and abandonment instead.
       cached 24 July 2016 13:02:01 GMT