Most of their load is presumably positional updates. Uber wants both customers and drivers to keep their app open, reporting position to Master Control. There have to be a lot more of those pings than transactions. Of course, they don't have to do much with the data, although they presumably log it and analyze it to death.
The complicated part of the system has to be the matching of drivers and rides. Not much on that yet, but that's what has to work well to beat the competition, which is taxi dispatchers with paper maps, phones, and radios.
There are 20+ complex tools listed in the stack, and running a high-visibility production system would require a high level of expertise with most of them. Docker, Cassandra, React, ELK, and WebGL are not related at all in the skills and knowledge they require (as, for example, Go and C are). Is it 5 bright guys and girls managing everything, like the React team within Facebook? Or a team dedicated just to log analytics?
Interesting to see Google Maps being used; isn't that blocked in mainland China?
I wonder what the architecture of the app and the API for this looks like.
I only wish LE would treat CAN-SPAM seriously and put more resources into criminal enforcement.
> does my model perform worse than the true model?
Do people really expect R^2 to measure the fit of the model to the true model? R^2 measures the fit of the model to the data: i.e., how well the model performs at predicting the outcomes. In his first example it is clear that all the models are equally useless: the noise dominates and the predictive power of the models is close to zero. In the second example the predictive power of all the models has improved, because there is a clear trend. The true model predicts much better than the others now, but each model predicts better than in the previous example.
In the first example, he concludes: "Even though R^2 suggests our model is not very good, E^2 tells us that our model is close to perfect over the range of x."
Actually our model is "better than perfect". The R^2 for the linear model (0.0073) and for the quadratic model (0.0084) is slightly better than for the true model (0.0064). Of course this is not a problem specific to the R^2 measure (the MSE for the linear and quadratic fits is lower than for the true generating function) and can be explained because the linear and quadratic models overfit. E^2 is essentially the ratio of the 1-R^2 values (minus one). We get -0.00083 and -0.00193 for the linear and quadratic models respectively (the ratios before subtracting one are 0.9992 and 0.9981).
In the second example,"visual inspection makes it clear that the linear model and quadratic models are both systematically inaccurate, but their values of R^2 have gone up substantially: R^2=0.760 for the linear model and R^2=0.997 for the true model. In contrast, E^2=85.582 for the linear model, indicating that this data set provides substantial evidence that the linear model is worse than the true model."
The R^2 already indicates that the linear model (R^2=0.760) and the quadratic model (R^2=0.898) are worse than the true model (R^2=0.997). The fractions of unexplained variance are 0.240, 0.102 and 0.003 respectively, and it's clear that the last one performs much better than the others before we take the ratios and subtract one to calculate the E^2 values 85.6 and 35.7 for the linear and quadratic models respectively.
(By the way: "we'll work with an alternative R^2 calculation that ignores corrections for the number of regressors in a model." That's not an alternative R^2, that's the standard R^2. The adjusted R^2 that takes into account the number of regressors is the alternative one.)
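To make the relationship between the two measures concrete, here is a minimal Python sketch of the noise-dominated first example as I read it; the exact noise level and x range are my own assumptions, and E^2 is computed as the ratio of the 1-R^2 values minus one, per the description above.

    # Assumed setup: y = log(x) + large Gaussian noise (first-example regime).
    # E^2 is taken as (1 - R^2_model) / (1 - R^2_true) - 1.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(1, 10, 200)
    y = np.log(x) + rng.normal(scale=5.0, size=x.size)

    def r_squared(y, y_hat):
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1 - ss_res / ss_tot

    # Least-squares fits: linear, quadratic, and the "true" log(x) shape
    # (with its own fitted scale and offset).
    lin = np.polyval(np.polyfit(x, y, 1), x)
    quad = np.polyval(np.polyfit(x, y, 2), x)
    a, b = np.linalg.lstsq(np.column_stack([np.log(x), np.ones_like(x)]), y, rcond=None)[0]
    true = a * np.log(x) + b

    r2 = {name: r_squared(y, fit) for name, fit in
          [("linear", lin), ("quadratic", quad), ("true", true)]}
    e2 = {name: (1 - r2[name]) / (1 - r2["true"]) - 1 for name in ("linear", "quadratic")}
    print(r2)  # all near zero: the noise dominates
    print(e2)  # near zero, or slightly negative when the simpler models overfit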
I would add a few challenges. The example is a bit of a strawman: a log(x) function has unique properties that make the Xmax-Xmin vs. R^2 comparison work like that. In real data, a single-variable 'true model' rarely fits as well as in the example either.
Context is needed as well: depending on the use of the model, a linear or quadratic fit may be sufficient even for what is clearly a log dataset. The real failing is only for small values of x, maybe 5% of the range of total values. For that case, a bilinear model could fit quite well for the lower 5%, with the existing model for the upper 95%. It depends on the application. I like this phrase:
"When deciding whether a model is useful, a high R2 can be undesirable and a low R2 can be desirable."
Too often statistics are dominated by 'cutoff' values that people apply blindly to all situations.
What do you think of robust regression methods, where obvious outliers are down-weighted?
It should take between 5 and 10 seconds. (With more than 10 seconds it gets boring.) With very short times, there is a lot of noise and you can confuse the noise with a small signal. You can do benchmarks that are shorter but then you must open a statistics book and read it carefully before reaching conclusions.
Repeat it at least 5 times. (Preferably in some order like ABABABABAB, not AAAAABBBBB.) With 5 repetitions you can get a rough estimate of the variation, and if the variation is much smaller than the difference then perhaps you can skip the statistics book. Otherwise increase the run time or increase the repetitions and use statistics.
At least once in a while, run the benchmark method against two copies of the same code. Just make two copies of the function and benchmark the difference between them. The difference won't be exactly zero, because of the noise, but it should be small. If you can prove that one of the two exact copies is much faster than the other copy, then your benchmarking method is wrong. (This is very instructive; it's much easier to learn about benchmarking noise by doing a few experiments than by reading all the warnings in the books.)
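A minimal sketch of this workflow (the workload, target run time, and repetition count below are placeholders, not a prescription):

    # Interleaved (ABAB...) benchmark with a sanity check: B is an identical
    # copy of A, so any consistent "difference" is measurement noise or a
    # broken harness.
    import time

    def workload_a():
        return sum(i * i for i in range(1_000_000))

    def workload_b():  # identical copy of workload_a
        return sum(i * i for i in range(1_000_000))

    def bench(fn, target_seconds=5.0):
        # Run fn repeatedly for roughly target_seconds; return seconds per call.
        calls, start = 0, time.perf_counter()
        while time.perf_counter() - start < target_seconds:
            fn()
            calls += 1
        return (time.perf_counter() - start) / calls

    for rep in range(5):  # ABABAB..., not AAAAABBBBB
        a, b = bench(workload_a), bench(workload_b)
        print(f"rep {rep}: A={a:.6f}s B={b:.6f}s diff={(a - b) / a:+.2%}")
    # If A consistently "beats" its identical copy B by more than the run-to-run
    # variation, the harness (not the code) is what's being measured.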
A fundamental problem with Ethereum is contracts between anonymous parties. If the other party in the DAO hack were discoverable, the hack would not have been a major problem.
Executable contracts, rather than declarative ones, are probably a bad idea. Putting in a virtual machine is a cop-out: it means you don't know how to solve the problem of expressing contracts formally, and are pushing it off on someone else. That someone else will probably botch it, as the DAO did. I've previously suggested that decision tables would be a better basis for a contract system. This decision table tutorial is something of an ad for a tool, but it uses as an example the sort of things one would want in a contract.
https://en.wikipedia.org/wiki/Decision_table
http://reqtest.com/requirements-blog/a-guide-to-using-decisi...
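As a rough illustration of the idea (a toy example of my own, not taken from the links above): a decision table is an exhaustive mapping from combinations of conditions to actions, which makes it mechanically checkable for completeness and contradictions.

    # Toy decision table: the conditions and actions are invented for illustration.
    from itertools import product

    CONDITIONS = ("payment_received", "goods_delivered", "deadline_passed")

    RULES = {
        # (payment_received, goods_delivered, deadline_passed): action
        (True,  True,  False): "close_contract",
        (True,  True,  True):  "close_contract",
        (True,  False, False): "wait",
        (True,  False, True):  "refund_buyer",
        (False, True,  False): "wait",
        (False, True,  True):  "escalate_to_arbiter",
        (False, False, False): "wait",
        (False, False, True):  "void_contract",
    }

    # Completeness check: every combination of condition values needs a rule.
    missing = [c for c in product((True, False), repeat=len(CONDITIONS)) if c not in RULES]
    assert not missing, f"uncovered cases: {missing}"

    def decide(**facts):
        return RULES[tuple(facts[c] for c in CONDITIONS)]

    print(decide(payment_received=True, goods_delivered=False, deadline_passed=True))
    # -> refund_buyer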
This case was a perfect storm of:
- very bad publicity with most people's first introduction to Ethereum, the DAO, being hacked
- Ethereum core team members and miners being affected by the hack
- a large hack, monetarily speaking, in relation to the ecosystem.
- the ecosystem still being in its infancy, so the system is still open to even considering a hard fork option at all
I mean, you can't just outlaw programming mistakes. This is going to happen again.
What happens next time when the hack is smaller, say 1 million, and doesn't affect anyone on the core Ethereum team or any miners, i.e. it just affects regular Ethereum investors?
Does everyone vote again, with the expected outcome of the miners not worrying about the little people?
Does it not even get a vote?
Or have they set the precedent that they always roll back hacks now?
I think Ethereum has some good leadership, but I think this is something that they really need to get a policy on now.
What can be done here?
> Build unstoppable applications
> Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.
One should add at the end: "...unless the developers are heavily invested in a contract, in which case they can perform an Ethereum hard fork to take back their investment in case it goes awry. You know, conflict of interest and all that."
Because that is exactly what happened here.
What is being suggested by Ethereum is that you can try an evolution path, see it's not actually working and then roll back - it seems like a good idea at first because you can supposedly iterate fast. But I think Ethereum would lose the confidence of investors either way: 1) if you hard fork, you set a precedent and now no one is sure what can be rolled back, when and for what reason; 2) if you don't fork, then money is stolen. I seriously see no way out of this. It probably was a good decision to fork, because that at least allowed them to save face and the investment short-term.
Specifically, Ethereum was designed with the explicit purpose of running distributed applications that commit their application state into the blockchain. If this state can be subject to a rollback, in the form of forking from a block in the past -- community consensus notwithstanding -- then you can never rely on the blockchain as storage.
Now, of course, cryptocurrencies only have value if participants believe in them; and these supposed 'smart contracts' of 'state-in-the-blockchain' work the same way. Since we lack the technological means to prevent forks, we need human stewardship to avoid them, and that's clearly not what Ethereum has done.
Important for what? Is there adaptability without mutability?
Programmers know that if you implement something with the premise of immutability then if mutability occurs the state of your application changes arbitrarily.
Mutability is good for the ones who can rely on an existing provider of centralized governance to make decisions that benefit them. For everyone else it's just plain corruption.
Ethereum pivoted from decentralized governance to centralized governance across borders. They are now solving different problems and I hope they see it.
If "adaptability" comes down to a popularity contest, this probably isn't a viable/stable/ethical currency.
Gold has had a long history of success and it wasn't able to undergo a hard fork. The fiat currency that we live with now has legitimacy due to the fact that you can generally expect its value tomorrow to be the same as today. Imagine a fiat-currency hard fork and what that would do to its credibility.
On USV's portfolio site I just found Blockstack Labs (2014), Coinbase (2013). Are there more or recently closed ones?
Oh well, guess I'll just keep trying to build useful stuff on top of ethereum and hope the opinions change (or that this thread is not representative :-)
Same old debate under new disguise.
In that case, the Ethereum hard fork doesn't sound very controversial - even if the devs decided to hard fork for bad reasons later on, no one would be forced to adopt it. They would be "voting with their feet" by sticking to the old chain.
I had some ETH stored on the "old" blockchain. I bought them during all the DAO publicity, but never actually bought any part of the DAO contract.
The client, if you are unaware, is a massive disk space hog. Because of this, I stopped running it.
Now, because of the hardfork, in order to get my value back out of the old chain and into the new one, I have to attach an external disk to my computer, and use it to store the chain.
Can I transfer out of the old and into the new chain indefinitely? I have no idea (maybe I'm just bad at google). Can I even transfer now today (once the client finishes syncing [it has been going literally all day today])? Don't know.
I get the why of the hardfork, and I'm not saying that I'm going to stop supporting ethereum, just venting a little bit.
Turns out new technology is new! And sometimes corner cases like mine haven't been accounted for!
The big fallacy underlying the hard fork is the idea that bugs like this will only happen this one time and so the benefit of stealing the money back and giving it to community early adopters outweighs the costs to the institutional credibility of Ethereum.
In reality, there will be many, many more bugs in smart contracts where the intent of the coder does not completely match the behavior of the smart contract in the real world.
The Ethereum community, and notably its core team who wrote the code for the hard fork while claiming to be neutral in the matter, has sent a strong message that it will meddle in the outcome of contracts in which there was no VM bug.
Human institutions are relatively vulnerable to corruption. There is all sorts of graft, favoritism, etc., throughout most human institutions. Ethereum, because of the concentration of power among early adopters, is still vulnerable to this sort of corruption. We've seen it happen with the hard fork.
Is it a big deal? Well, the invisible hand should have awarded the spoils of the theft to the talented hacker who exploited the contract. Those who lost money in the DAO are people who followed a herd mentality and did not insist that the smart contract they trusted be vetted.
Formal verification will help somewhat, as will the improvement of coding practice. I saw some code written by someone involved in the DAO that was written in a way that made it hard to understand the side-effects of various calls. I'd highlight this if doing code review for a simple e-commerce cart, and it suggests that semantic clarity and readability were ranked low on the list of priorities, favoring a denser style that is much more demanding of the reader's understanding of the subtleties of the language.
What is missing is the simple idea of insurance. Suppose investors in the DAO had been allowed to buy insurance against the DAO malfunctioning... This could have been written as a simple smart contract "future" and could have been offered by anyone. So long as there was demand for both sides of the outcome, a price would have emerged to insure one's investment.
So I think we're on a slippery slope, most notably because of the silly idea that this was the last smart contract bug that will be highly significant or controversial.
Cryptocurrencies bootstrap by appealing to speculators who don't really care about the principle of how it's supposed to work, they just want to buy it early and wait for it to get big (as many did with BTC but many more wished they had). This is fine, but we saw the same sort of greed infect a lot more people, and a hard fork (bailout) occur soon after. You can call it adaptability, you can call it a bailout, it doesn't matter. The bottom line is that at present Ethereum is still vulnerable to it and will be for a while. Let's hope Ethereum grows to the point where a small cabal of people who made a bad investment (or hold a particular political view) can't undermine the system.
I'd like to learn some of this stuff. Where should I start? What are some things happening?
We detached this subthread from https://news.ycombinator.com/item?id=12151921 and marked it off-topic.
Unfortunately, together with the success of perturbative quantum gravity, this suggests that quantum gravity is just not experimentally accessible. (That's a thought by Freeman Dyson originally.) The formal argument would be that a lot of different theories lead to the same thermodynamical limit, so that one cannot determine the true underlying theory. Or, to put it in more Hacker News terms, the problem is analogous to trying to learn about TCP/IP by looking at the output of black hole simulations; there is the layer of numerical mathematics in between, which is at least very hard to breach.
Older guest post by Grant Remmen on Carroll's blog: http://www.preposterousuniverse.com/blog/2016/02/08/guest-po...
 For this discussion, it is basically cheating. You quantize gravitational waves on a classical background geometry. The approach works very well because gravity is weak, it also tells us nothing interesting, because one breaks the relationship between gravity and space time (the central feature of General Relativity) by hand.
*-* "Or, more accurately but less evocatively, find gravity inside quantum mechanics. Rather than starting with some essentially classical view of gravity and quantizing it, we might imagine starting with a quantum view of reality from the start, and find the ordinary three-dimensional space in which we live somehow emerging from quantum information."
- "If we perturb the state a little bit, how does the emergent geometry change? (Answer: space curves in response to emergent mass/energy, in a way reminiscent of Einsteins equation in general relativity.)
Its that last bit that is most exciting, but also most speculative."
- "But the devil is in the details, and theres a long way to go before we can declare victory."
- "In some sense, were making this proposal a bit more specific, by giving a formula for distance as a function of entanglement. "
- "Were quick to admit that what weve done here is extremely preliminary and conjectural. We dont have a full theory of anything, and even what we do have involves a great deal of speculating and not yet enough rigorous calculating."
*-* "Perhaps the most interesting and provocative feature of what weve done is that we start from an assumption that the degrees of freedom corresponding to any particular region of space are described by a finite-dimensional Hilbert space."
- "A finite-dimensional Hilbert space describes a very different world indeed. In many ways, its a much simpler world one that should be easier to understand. We shall see"
My 1 cent:
I don't know enough about this subject but...it's this creative thinking that is desperately needed in the sciences. I'm not trying to tear down others but instead just say that this is what happens when education advances. When enough (abstract) people focus on a subject, we will find a breakthrough.
Quantum mechanics seems to be how the universe really works. Outrageous predictions of quantum mechanics, from the two-slit experiment onward, have been experimentally verified. So it's a sound base for further work. Physics has been stuck for a century trying to reconcile relativity and quantum mechanics. This might be a way forward.
It might even lead to something that's experimentally verifiable.
The seeding work on this theory was published as early as 1967 and 1978.
Publication into arXiv is too restricted, because of the endorser requirement, for the article to be published there.
    LogLevel dumpio:trace7
    DumpIOInput On
    DumpIOOutput On
Note: created certs have a 60-minute cache (nginx) to improve performance. You don't want to download a certificate for every one of the static files in a single request.
I occasionally need to debug traffic. Not often, but occasionally.
For me, the ~$5 on Cellist (http://cellist.patr0n.us/index.html) was a no-brainer.
Many intercepting proxies, like Fiddler with FiddlerScript and Burp Suite through Burp Extender, can be extended to have any feature you want by writing your own code or leveraging someone else's. Personally, the only time I've found myself thinking I might need nginx for a debugging proxy is when I need scale. I'd rather use something that's close enough, write stuff where I need to, then focus on doing really cool things with them, like finding vulnerabilities for fun and profit.
These design decisions allow me to provide safe pointer access and avoid all race conditions while teaching programming and concurrency, but they probably incur significant performance loss on certain programs. My hope is that the design constraints they impose on the programmer aren't insurmountable. We'll see.
(More info on the project: https://github.com/akkartik/mu#readme. On its memory model: https://news.ycombinator.com/item?id=11855470. On the deep-copy implementation: https://github.com/akkartik/mu/blob/07ab3e3f35/073deep_copy....)
"An Analysis of Linux Scalability to Many Cores" (https://pdos.csail.mit.edu/papers/linux:osdi10.pdf)
"A number of distinguished scientists have indulged in self-experimentation, including at least five Nobel laureates; in several cases, the prize was awarded for findings the self-experimentation made possible. Many experiments were dangerous; various people exposed themselves to pathogenic, toxic or radioactive materials. Some self-experimenters, like Jesse Lazear and Daniel Alcides Carrin, died in the course of their research."
I am glad that there is at least an attempt here to engage and warn people. (If I were interested in doing this, I would definitely heed the warning myself!)
Having said that, we ultimately have to remember and respect the right to individual self-determination, as well as the incredible value of openness in scientific research.
While I'm not willing to do this to myself, what if these people are onto something, and their experimentation leads to something that enriches our world? (I'm not saying that it's likely, but it is possible.) It's not anyone's place to do much more than warn them, as has been done, and then to let them be.
> Single-session tDCS data in healthy adults (18-50) from every cognitive outcome measure reported by at least two different research groups in the literature was collected. Outcome measures were divided into 4 broad categories: executive function, language, memory, and miscellaneous. To account for the paradigmatic variability in the literature, we undertook a three-tier analysis system; each with less-stringent inclusion criteria than the prior. Standard mean difference values with 95% CIs were generated for included studies and pooled for each analysis.
> Of the 59 analyses conducted, tDCS was found to not have a significant effect on any regardless of inclusion laxity. This includes no effect on any working memory outcome or language production task.
The topic has appeared on HN other times, but never getting >=10 comments.
It's ironic to think that, given everything we think we know about how the brain functions, such a gross and simple procedure as wiring a battery to your head or inducing a magnetic field would have such strong and even beneficial effects.
If that's all so, could the PoGo devs simply enforce some type of device authentication to 'shut down' these APIs, or otherwise take different steps to make unofficial APIs less compatible/more difficult/effectively impossible?
Overall it's sad that most game mechanics of the original games didn't make it into Pokemon Go. Does anyone know how much time they had to implement it?
The two major categories of cheatifying in Ingress are falsifying one's location and multi-accounting. There's precious little that can be done about the latter, so Niantic focus on banning players that appear to be "spoofing" their location.
Given the wealth of different devices and playing scenarios, immediate detection of GPS spoofing is infeasible. Things like WiFi router locationing idiocy (or even just dodgy GPS antennae) play havoc with the utopian dream of perfect positioning every time. If a player performs actions seconds apart that are separated by thousands of miles then the game temporarily ignores them, but after some time in the naughty corner they can resume play.
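A rough sketch of that kind of immediate check (the speed threshold, the haversine shortcut, and the "ignore the action" response are my assumptions about how such a check might look, not Niantic's actual code):

    # Flag actions whose implied travel speed between two position fixes is
    # physically impossible. Threshold and handling are assumptions.
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        r = 6371.0  # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def looks_spoofed(prev_fix, curr_fix, max_kmh=1000.0):
        (lat1, lon1, t1), (lat2, lon2, t2) = prev_fix, curr_fix
        hours = max(t2 - t1, 1e-3) / 3600.0
        return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

    # Two actions 30 seconds apart, San Francisco vs. London: clearly impossible.
    print(looks_spoofed((37.77, -122.42, 0), (51.51, -0.13, 30)))  # True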
More robust spoofing detection instead depends on longer-term profiling. Ingress has a similar API to Pokémon Go: JSON chunks (rather than protobuf) over HTTPS, most fields out in the open, but each request from the app includes a monolithic "clientBlob" containing device characterisation. The format of this has (presumably) been reverse-engineered by a few hardy souls, but it is certainly closely-protected Niantic knowledge. We could safely assume that it's a proprietary blend of signal strengths, gyroscope readings, touch events and timings, secret herbs and spices, etc.
The clientBlobs lend themselves to offline processing. There are conceivably servers continuously trawling through a backlog looking for tell-tale patterns of bad behaviour, but the blobs also provide an audit trail if a particular player is suspected of spoofing. Occasionally Niantic indulges in mass purges, which presumably follow from a new cheat-detection heuristic being run on all the collected data for some period. These "ban waves" have a reputation for penalising unusual device configurations (the most recent major wave appeared to target, amongst other things, players with modified Android variants that might mask GPS-falsifying code, including cheaper Chinese knock-offs, and Jolla phones running Sailfish).
Occasionally, during major Ingress gaming events (so-called "XM anomalies"), there is some level of human supervision to quickly identify and remedy clearly fraudulent player behaviour, but for day-to-day operations it seems that account termination, so-called "hard bans", and shorter-lived "soft bans" are entirely automated, and based on offline player data analysis.
Getting back to the New Cruelty: the clientBlob was not part of Ingress's initial implementation; for a while after it was introduced it was ignored, and then it became mandatory. A similar opaque chunk of data is included in the Pokémon Go requests, so we should look forward to its imminent deployment once Niantic scrape together enough Pokécoins to buy a few new servers for batch processing. At that point these convenient APIs won't have long to live.
* Just a working 3-step tracker
* Gym high scores
* Display nicknames of gym pokemon
The Python version has been out for weeks now and I thought this was the Golang version.
Also, does this API depend on running on Android?
I vaguely remember stories about game companies legally going after companies making bots.
Would uploading an API like this open someone up to a lawsuit? What about someone uploading a bot or a botting framework?
Each one is available from https://aws.amazon.com/documentation/ -> click service -> click "Api Reference"
If all you've saved is one click, I don't think it's worth it, so what else does this do?
P.S. Agree that not all of the services adhere to the same bar.
One thing people should know about AWS and AWS APIs in general is that it is a ghetto. The ad-hoc nature of most AWS services and their weird interactions is indicative of generally bad design. Even with the AWS Ruby SDK I can barely get anything done without consulting 5 different references about which parameters are required, which are optional, and what the sequence of various calls is supposed to be to get an intended result.
So even though this is useful a cookbook would have been much more useful.
Or did you actually scrape the documentation web pages?
Then you have ancient tools like HFM or BPC that try to "enhance" spreadsheets and spreadsheet consolidations. They don't work. They are barely reliable and they cost a lot of money for no reason (they are simple SQL databases; they can't even be compared to much more complex software like Salesforce or similar).
Then there's the whole "big data" thing corporations are now being sold, which is old news by now for tech audiences but still new to most big companies. Unfortunately, while big data has a lot of potential, it pulls investment further away from good old small data, where there is probably much more ROI to grab, simply because solving this problem is not that difficult and not that expensive if you actually want to fix it.
Now everybody wants to do predictive analytics, but predictive won't beat saving hundreds of employees thousands of working hours by improving the efficiency of how we handle data and data insights. You could literally staff a new workforce out of the many hours you would save with better efficiency in this area. And that's without even considering that, on top of efficiency, you get much better, more fact-based decisions, driven by the overall increase in transparency, which you lose when insights are spread across thousands and thousands of files.
There is some light at the end of the tunnel. Software solutions like Tableau go in the right direction, but they do not provide the much (more) needed tools to properly manipulate and manage data and, especially, consolidations and integration. The only way to get out of this mess is to control the data flow but especially to have 1 and only 1 data flow. That means once your insights are approved and locked, every other view should read this data, there should be no manual intervention any more. If the locked data is wrong, it will be wrong for everybody, which is a good thing compared to having multiple copies of the same and then people fighting on which one is the right one.
I haven't used this in production, but messed around with it for a while. Seems to do what this article is suggesting.
Link to the project page: http://people.csail.mit.edu/ebakke/sieuferd/
Source code or it doesn't exist ;)
You should never pass user input to unserialize(). Assuming that an up-to-date PHP version is enough to protect unserialize() in such scenarios is a bad idea. Avoid it, or use a less complex serialization format like JSON.
Surprised that worked. Guess they got lucky and either got the compiler+optimization flags the same as the PHP binary used, or the release process can create highly similar builds.
    my $php_code = 'eval(\'
        header("X-Accel-Buffering: no");
        header("Content-Encoding: none");
        header("Connection: close");
        error_reporting(0);
        echo file_get_contents("/etc/passwd");
        ob_end_flush();
        ob_flush();
        flush();
    \');';
2. If I understand the exploit correctly, they got remote code execution by finding the pointer to 'zend_eval_string' and then feeding the above code into it. Doesn't that mean the use of 'eval' in the code being executed is unnecessary?
From a legal perspective, how do companies and HackerOne create a binding exemption from the laws used to prosecute hackers?
> Dump the complete database of pornhub.com including all sensitive user information.
And of course leak the data to expose everyone that participates in this nasty business. It is such a sad thing that people are even proud to work at companies like this where humans are not worth more than a big dick or boobs.
And then you get around and say that child porn is so horrible. No, all porn is horrible and destroys our families and integrity. How can there be any dignity left if these things are held to be something good?
We just create so much crap for consumption. Even public broadcasting is full of the 24-72hr news cycle. I've written code that I was employed for where I was like, why are we doing this. I assume journalists are the same.
While it's clear that alcohol is not particularly healthy, I feel the risk is negligible compared to other common behaviours. I like alcohol very much and I'd like to make an informed decision about it. But I dearly miss the crucial, end-user-friendly information.
(Wikipedia, citing http://monographs.iarc.fr/ENG/Classification/Classifications...)
So what is it? Is moderate drinking helping? Or is it the lifestyle of moderation helping?
From the National Cancer Institute: "Can drinking red wine help prevent cancer? Researchers conducting studies using purified proteins, human cells, and laboratory animals have found that certain substances in red wine, such as resveratrol, have anticancer properties (16)."
Meanwhile, the same National Cancer Institute source writes that "[b]ased on extensive reviews of research studies, there is a strong scientific consensus of an association between alcohol drinking and several types of cancer (1, 2)."
Drinking causes cancer but red wine is known to have anticancer properties? Abstainers in one study have higher risk factors than moderate drinkers?
> She goes on, however, to knock back links suggesting that drinking may lower a person's risks of cardiovascular disease (CVD), noting that people who drink moderately also tend to have other lifestyle factors that lower their disease risk. Or, put another way, she noted that in a large US survey in 2005, 27 of 30 CVD risk factors were shown to be more prevalent in abstainers than moderate drinkers.
This is far from a new problem, and this particular piece is far from egregious, relatively speaking, considering how bad public science reporting is in general in the mass media.
John Oliver had fun with it recently: https://www.youtube.com/watch?v=0Rnq1NpHdmw
Here's one correlating ethanol usage with a lower risk of kidney cancer: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049576/
Here's a study linking ethanol consumption with a reduced risk of ALS: http://www.ncbi.nlm.nih.gov/pubmed/22791740
And it's not just one study linking ethanol to a lower risk of CVD-- it's several. The story is the same for all cause mortality. Acetaldehyde doesn't fully explain elevated risks of throat and mouth cancer, in my opinion... Acetaldehyde is a downstream metabolite, whereas the epithelium of the mouth and throat are tissues that ethanol is clearly coming in direct contact with. Again, this is just a crude hypothesis.
Sure, ethanol is a toxin, but attempting to avoid it in an attempt to avoid toxins or carcinogens is a fantasy. Carcinogens are everywhere -- you breathe them, eat them, ingest them, absorb them constantly. This is why low/moderate exposure to sunlight, alcohol, and certain phytochemicals might actually be 'hormetic'.
I'm not saying that in an era of biotechnology and whole genome sequencing, ethanol consumption will be optimal. When we reach that point, we will most likely be consuming some kind of nutrient gel that contains everything the body needs. We will likely inhabit carcinogen free environments. Until that point, and I say this to all my fellow autistic nerds and hacker news readers, it's probably better to go have a drink or two with that cute girl who sits a few cubes down. If you want to extend life, invest/educate yourself on emerging biotechnologies. Otherwise you'll need to start worrying about the carcinogenic materials your electronics occasionally off-gas. Or the PCBs in your wild caught salmon. Or the benzaldehyde. Or the arsenic in your brown rice. Or the pesticide residues in your clothing. Or the ... nevermind.
The bottom line is, for the same amount of primary energy, a pure electrical car gets 2-3x the range of a hydrogen car. And of course, that directly translates into the costs of driving.
The article also gives some off numbers about electric cars: they are not limited to 200 miles of driving (a Tesla does up to 300), and recharging at a Supercharger station takes about 30-45 minutes. And, of course, in contrast to a hydrogen car, an electric car can be recharged overnight at home, so most trips do not require any recharging on the road at all.
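A back-of-envelope version of that 2-3x claim; every efficiency figure below is my own rough assumption (commonly quoted ranges), not a number from the article or this thread.

    # Fraction of input electricity that reaches the wheels, per path.
    # All component efficiencies are assumptions for illustration only.
    battery_path = 0.95 * 0.90 * 0.90          # charger * battery round trip * motor/drivetrain
    hydrogen_path = 0.60 * 0.90 * 0.55 * 0.90  # electrolysis * compression/transport * fuel cell * motor

    print(f"battery:  {battery_path:.2f}")
    print(f"hydrogen: {hydrogen_path:.2f}")
    print(f"ratio:    {battery_path / hydrogen_path:.1f}x")  # lands in the ~2-3x range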
I had not considered the weight disadvantage of Li-ion. Perhaps we'll have two technologies side by side: batteries for smaller, more centrally-located vehicles and hydrogen for larger and more remote ones.
The energy-intensity of enriching, packaging and distributing hydrogen looks less daunting in a duck-curve-characterised solar future. One could simply generate hydrogen when rates drop below a threshold.
This article definitely reads like a PR piece from an oil company (which is, after all, where the hydrogen comes from). The New York Times has a history of biased journalism against electric cars...
Fuel cells typically use a lot of highly expensive catalyst, with full costs on the order of a million dollars each. Odds are good the sales price of these vehicles (just under $60k) represents a small fraction of the manufacturer's actual costs. This is a test pilot for real-world experience.
Hydrogen is the other problem, both in sourcing it and in distributing and dispensing it. Hydrogen is not an energy source, but an energy carrier. You need some other source of energy to provide hydrogen, usually via electrolysis, or some hydrogen feedstock, usually natural gas.
In the case of electrolysis, your problem is the energy cost, which eats about 40% of the input energy. The bad news is you get much less hydrogen energy out than electrical energy in; the good news is that you can store the hydrogen, while electricity doesn't bank well.
In the case of natural gas, you've got the situation that your vehicle is still ultimately consuming fossil fuels, though methane (CH4) emits far less CO2 than petrol (roughly C8H18) -- about half as much. Since you're pre-processing the methane into hydrogen, there's the option of sequestering the carbon for other uses.
Hydrogen also has extensive problems with storage and handling -- it doesn't like to be contained, will literally leak out between the atoms of containers, embrittles metals, etc., etc.
Another alternative, one I'm interested in, though 50 years of serious research has yet to result in a working large-scale prototype, is Fischer-Tropsch fuel synthesis. Effectively it creates hydrocarbon fuels using electricity. The most promising model I'm aware of sources both hydrogen and carbon from seawater, hence seawater-based Fischer-Tropsch fuel synthesis. Penciling out the studies I'm aware of, it actually could scale up to current US and foreseeable global levels of production, without literally paving the world with solar panels and/or synthesis plants. E.g., not patently impossible. The fact that the research hasn't proceeded further makes me question its ultimate practicality.
If, however, it is possible, then we end up with a fuel that is an exact chemical analog of existing hydrocarbon fuels, is infinitely miscible with them in the fuel processing, dispensing, and utilisation chain, and is carbon neutral.
And if you're generating hydrogen, you're already about 90% of the way to creating synthetic hydrocarbon fuels which avoid most or all of hydrogen's storage, transport, dispensing, handling, embrittlement, and energy conversion issues.
As I mentioned, I've looked into this in several posts, you might want to start with the historical overview here:
1. Technically, so are petroleum, coal, and natural gas. Though the source energy was supplied hundreds of millions of years ago, on average, in the form of sunlight converted to plant matter. Given this, at the rate of roughly 5 million years of ancient primary production per year of current consumption, you could make a reasonable argument that the fully realised solar energy cost (what's called "emergy") of petroleum is about 5 million per single unit of energy delivered. At the very least, we're spending 5 million years of accumulation per year. Something you might want to reflect on.
2. Brookhaven National Lab, M.I.T., and the US Naval Research Lab. Generally serious outfits.
3. For a counterexample, see schemes for biofuels. At best, present US fuel consumption would require plausibility-stretching levels of development, if not quite literally multiples of total US landmass. E.g., quite patently and evidently batshit impossible.
Tesla had a great 'well to wheel' efficiency analysis post on their blog a few years back that killed any notion of a hydrogen economy.
What I find more interesting is if we're a random fluctuation on a Turing machine tape. "Randomness" takes a lot of space to encode in a program, so smaller (more likely) programs lead to less random, more structured outputs. This agrees with our observations.
Recommended; it's a very interesting read.
> A Boltzmann brain is a hypothesized self aware entity which arises due to random fluctuations out of a state of chaos.
This is really strange. Years ago I wrote a short story about a similar topic, without ever having read about the Boltzmann brain:
"History of Everything"
Happy Birthday, everyone!
One to one and group chats, group video and audio calls, GIF search built-in, doodles, the best implementation of photos in the message stream that I've seen, poking and playable Spotify and Soundcloud music by just sharing links? All with end-to-end encryption?
I have that "too good to be true" feeling but, still impressed. Just waiting for possible audits and more feedback from the security community.
Edit: It's also Switzerland-based, already supports Win10, MacOS, Web, Android and iOS, and to top it off has the cleanest design I've seen in a messaging app.
With some digging I've found a way to verify key fingerprints so that's nice, but it's manual, not QR assisted :(
At any rate, there are lots of us who can use the code with that license :-)
Another thing that I wonder about: Does being Swiss-based give them a privacy advantage?
Why do people copy the license all over the place like that?
At least Hangouts lets me use the app without a phone number.
1) desktop app
2) video call support
3) self-deleting messages
Signal finally (sort of) delivered a desktop app, but it still doesn't have the other two. Wire has the first two, but it's still lacking the last one. I hope one of them will have all three of these features soon.
If you are expecting to apply something immediately to your work then combinatorics and probability theory are way more relevant and practical for day to day programming work.
An abstract framework like category theory can actually be harmful where there isn't that much concrete detail to begin with. A personal example I faced was being overwhelmed by category theory jargon when starting to learn Haskell a couple of years back. My confidence returned only when I realized that there was just one category in play with types as the objects and functions as the arrows. The jargon was unnecessary. Today Haskellers discuss so many interesting issues about the language and its implementation which do not fit into the category theoretic framework at all.
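For what it's worth, the "one category, with types as objects and functions as arrows" picture needs nothing beyond identity and composition obeying the usual laws; a throwaway sketch (Python stands in for Haskell here just to keep one language in this thread):

    # Types as objects, functions as arrows: composition, identity, and the laws.
    def compose(g, f):
        return lambda x: g(f(x))

    def identity(x):
        return x

    f = len              # arrow: str -> int
    g = float            # arrow: int -> float
    h = lambda v: v * 2  # arrow: float -> float

    assert compose(g, f)("hello") == 5.0                # composition
    assert compose(f, identity)("hi") == f("hi")        # right identity
    assert compose(identity, f)("hi") == f("hi")        # left identity
    assert compose(compose(h, g), f)("abc") == compose(h, compose(g, f))("abc")  # associativity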
All computer science is divided into Theory A (algorithms and complexity) and Theory B (logic and programming language design). Applications of category theory are part of Theory B. If you're a programmer who wants to have real world impact, you should study tons of Theory A and completely ignore Theory B.
That sounds inflammatory, but it is unfortunately 100% true. All of Theory B combined has less impact than a single hacker using Theory A to write Git or BitTorrent.
1.3 What is requested from the student

The only way to learn mathematics is by doing exercises. One does not get fit by merely looking at a treadmill or become a chef by merely reading cookbooks, and one does not learn math by watching someone else do it. There are about 300 exercises in this book. Some of them have solutions in the text, others have solutions that can only be accessed by professors teaching the class. A good student can also make up his own exercises or simply play around with the material. This book often uses databases as an entry to category theory. If one wishes to explore categorical database software, FQL (functorial query language) is a great place to start. It may also be useful in solving some of the exercises.
Added in edit: To reply to both (currently) comments, you don't learn how to program by just reading and not doing. You can learn about programming, but you won't actually be able to program, and your knowledge will be superficial. If that's your objective, just to know about this topic, and others in math, then fine. If you want to be able to use your knowledge in any meaningful way, my comment stands. It's not enough just to read. You have to engage, and do the work.
art+com created a massive chalkbot at Jelling Museum in Denmark: https://artcom.de/en/project/experience-centre-royal-jelling...
Now I'm wondering if these stepper motors might be a better alternative, maybe a third one against some sort of a spring loaded harness.
Either way very cool, would be interesting to see if a third motor might have helped rather than relying on gravity and good behavior of the chalk pen.
Still cool though!
I've wanted to do a drawbot (paper and pen) for a while, and thought these pulleys and belts might be good.
1: https://www.adafruit.com/products/1251
2: https://www.adafruit.com/products/1184
That reminds me of http://rampantgames.com/blog/?p=7745:
"In the main engineering room, there was a whoop and cry of success.
Our company financial controller and acting HR lady, Jen, came in to see what incredible things the engineers and artists had come up with. Everyone was staring at a television set hooked up to a development box for the Sony Playstation. There, on the screen, against a single-color background, was a black triangle.
"It's a black triangle," she said in an amused but sarcastic voice. One of the engine programmers tried to explain, but she shook her head and went back to her office. I could almost hear her thoughts: "We've got ten months to deliver two games to Sony, and they are cheering over a black triangle? THAT took them nearly a month to develop."
The difference literally comes down to whether you are doing easy things or not. Having an engine or framework does help make a variety of things easy, but it does nothing for the one or two features that aren't. Eventually you hit a wall where it takes forever, and that's your next month. You get over the wall and then a flood of other new features come in almost instantly. Also in the same ballpark are features that you have coded before and are familiar with, vs. ones you aren't. You can get a lot done "from scratch" by spamming preexisting knowledge at the problem, but it still takes time and it isn't exactly easy either.
Last of all, at first clone-and-modify is enough to feel interesting. So you go very quickly, because you care little about the result. But after a few dozen times doing that, you're done, and you want to expand the parts you care about. That creates more barriers to get over, more months where progress is slow because your ambition is big enough to no longer follow the easy path. More months where problems are on the content development side, not the runtime. That part is always difficult. Scope is deceptive.
It's lies, all lies :) actually it's probably solid advice - the hard bit turns out to be following it.
The missing piece of advice turned out to be - "stay disciplined" ...
> Don't build an engine instead of a game
Only too true... That's a trap I always fall into. I guess programmers are so hard-wired to abstract the problems that need solving that they end up doing a lot of abstracting and very little solving. (Compulsory xkcd: https://www.xkcd.com/974/)
Haha. About a year ago I went through 6 hours of updates to get a Python script to create mouse and keyboard inputs (moving the mouse and typing to interact with GUI apps). After that I called it a day and have barely touched the project since.
This is a great technique, but takes some discipline. Reminds me of GTD.
Back then there weren't engines like Unity and Unreal that solve most of the technical problems like rendering, and physics/collision detection. Even with these tools though, I still find it difficult to finish a project quickly because I spend so much time writing infrastructure code. For that reason I never do hackathons.
The reason is simple, I believe? Nobody knows how to use it beyond myself.
Even if that isn't the case, I still need to work out a tutorial anyway.
I did and it was my first game. The sentiment definitely holds true for most though.
Here's a basic tutorial with comments for the Vulkan C API. Vulkan is a very low level API, so there's a lot of code. It should be straightforward to port the C tutorial to use the C++ API.
I'm not very knowledgeable regarding Vulkan, so hopefully that isn't a stupid question, but I want to brush up on my C++ skills and play with this!
Also how similar is Vulkan to SDL? I used to use SDL quite a bit back in the day and it was awesome but I'm assuming Vulkan is far more comprehensive?
My wife is a speech therapist and uses a system that is designed to help people who have had strokes regain their voice.
It comprises a piece of software that comes with a "specially calibrated USB microphone". The microphone is actually a Samson laptop USB mic that had the voice improvement system's logo stuck on it.
The system came with lots of legal warnings about not copying, not telling unqualified people about how it worked and not to use an unapproved microphone. The DMCA was specifically mentioned.
One day the mic failed (the program requires patients to shout aggressively at the mic), so my wife went off looking for a replacement. We had a few USB mics that we tried, and the application refused to acknowledge their existence even though they showed up in Windows. It became obvious that the software was checking the USB device ID. My wife went to the company that ran the system to get a replacement, but they said she had to buy a new copy of the software as well - total cost $659. So we took a chance and ordered a new Samson USB mic from Amazon for 30.00, but when it arrived it didn't work. It was the same model, but was a few generations ahead and therefore had a different USB device ID. My wife has some colleagues with the same package, so I tested their mics; they had different USB device IDs too, and it became obvious that when Samson released a revision of the mic, the company offering the system simply recompiled the code with the new device ID baked in and then re-branded the mic.
So, not wanting to shell out $659 for a whole new package, I took the old and new mics apart, desoldered the cartridges from both, and put the new one in the body of the failed mic. It worked! Now technically this would be a violation of 1201 in the sense that the individual copy of the software they sold you was tied to the specific mic they sold you at the same time - they said so in the EULA. But let's be honest: that's just nonsense. They were simply trying to sell more stuff - a tactic that seems fairly common in various fields of professional therapy.
This is the sort of problem caused by 1201. If we lived in the US we would have been in breach of the DMCA even though we copied nothing.
Also, the software is as ugly as sin.
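As an aside, the device-ID check described above comes down to reading each device's vendor/product ID pair; a quick sketch with pyusb (the "approved" ID below is a made-up placeholder, not the real Samson one):

    # List vendor:product IDs of attached USB devices and gate on an approved pair.
    # Requires pyusb; the approved ID is a placeholder for illustration.
    import usb.core

    for dev in usb.core.find(find_all=True):
        print(f"{dev.idVendor:04x}:{dev.idProduct:04x}")

    APPROVED_VENDOR, APPROVED_PRODUCT = 0x17A0, 0x0001  # placeholder values
    mic = usb.core.find(idVendor=APPROVED_VENDOR, idProduct=APPROVED_PRODUCT)
    if mic is None:
        raise SystemExit("approved microphone not found")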
More seriously, the GPLv3 contains an interesting provision. Search for "Anti-Circumvention" in this to find the section: https://www.gnu.org/licenses/gpl-3.0-standalone.html
The second paragraph is probably enforceable, but I'd be interested to hear from someone suitably informed whether the first paragraph has any basis. How far can it be taken?
For example, one of the most insidious things about the Blu-ray format is that unlike DVD and HD-DVD, commercially pressed video Blu-rays are obliged to use AACS. Theoretically non-AACS discs could be pressed and work, but the replication plants aren't _allowed_ to print non-AACS video Blu-rays. This has caused some consternation where people want to distribute Creative Commons/etc. video on optical media, more than can fit on a DVD. I think I recall Archive Team talking about just having to resort to putting video files on a data Blu-ray instead.
If someone made a film, put "Neither this work nor any derived work can constitute an effective technological measure for the purposes of the WIPO copyright treaty or any corresponding legislation" in the credits, and then someone else got AACS'd Blu-rays made of it, would 1201 thereby not prohibit breaking AACS specifically in the context of that Blu-ray? It seems rather dubious.
What's kind of cool about this issue is that it attracts support from citizens of all political stripes - whether you're a farmer who just wants to be able to fix his own damn tractor, or a hacker who wants to futz with proprietary hardware, the law is patently bogus.
Unfortunately, farmers and hackers have far less political influence than corporations. Hopefully by pursuing this through the courts and with adequate resources from the EFF some progress can be made that couldn't be in congress.
> EFF is representing plaintiff Andrew bunnie Huang, a prominent computer scientist and inventor, and his company Alphamax LLC, where he is developing devices for editing digital video streams. Those products would enable people to make innovative uses of their paid video content, such as captioning a presidential debate with a running Twitter comment field or enabling remixes of high-definition video. But using or offering this technology could run afoul of Section 1201.
It definitely should be legal to build those products. Maybe it should be legal to distribute that captioned video as fair use. But why should Twitter profit from a user captioning a video CNN created?
That's the part I have trouble with here. Fair use is fine and good, but there is a large universe of very profitable companies that don't make content of their own, but profit from other peoples' content. Of course they have a huge interest in weakening copyright protections under the guise of promoting fair use.
IP is completely flawed because it grants a monopoly on the fruits of specific knowledge or a work as if they are static end products, whereas in reality anything that is not evolving is dying. So the law restricts progress to the owners of the IP even when we could all contribute. And when there is incompetence or negligence by the owners, we have a situation where something good is ruined or withheld, with anyone fixing it being illegal.
Removing IP is impossible because it's about profit, which is also a right. What we need is a new revenue system based on new principles of an expectation of progress and open contribution. Open source software and hardware is this, but just without any standard profit model backed by law.
Today's kids have been well trained by Apple, Google and Netflix and hardly even understand what we are talking about.
"Oh, you don't have an iPhone anymore? Just buy it on Google Play and you will have it again on your Galaxy." is a quote I have heard more than once...
I'm not an American and do not live in America but the problems with American (copyright) laws unfortunately affect the world on a global scale. I sincerely wish you all the best in your efforts and hope that other organisations as well as the (fantastic) EFF back you.
I stand behind you.
Section 1201 contains the anti-circumvention and anti-trafficking provisions. These infringe upon fair use activities like format conversion, repairs, and security research.
Now my attempts so far are stymied by this weird half world. Most government contracts basically want either bums on seats contractors or to fundamentally hire "someone who has done it before" (effectively the same as wanting to buy off the shelf)
So there is almost no way to seed fund the initial OSS development.
Down thread people talk about a fund for starting OSS projects to provide things like this. Plover is an example of people trying it on their own - but a funded system that basically follows current gov work seems better.
That's a valid point but I don't see how it's gotten to 1000 points, so I think I'm missing something. What's the lawsuit? What's the egregious use case?
And it's such a shame, too, since those laws were bought and paid for by lobbyists, and what does it say about the rest of the country if one can't expect to get what one pays for when lobbying at the highest level of government?
Before we get into socioeconomic barrier discussions: I am a former disabled homeless person who is now the founder of one of the most powerful environmental activism groups in the country. I started out with nothing and worked myself to where I am, using original and creative thought, and at no time have I ever needed anyone's intellectual property to build myself to where I am.
The Electronic Frontier Foundation, that supports this complete bullshit erosion of the rights of content creators everywhere, does nothing in this world but fight for causes that continually reduce the market value of original ideas.
They claim to fight for things like free speech but what they really fight for is the rights of anonymous hate groups to steal your photos and write nasty messages on them. They fight for the rights of the meek to inherit the Earth so they can then destroy it with their abject failures.
Look to the recent lawsuit Google v Oracle, where Oracle sued Google over the use of their software in Android. Google avoided billions in liability and it was all thanks to the work of the EFF, who suck off the teat of Silicon Valley and protect their billionaire buddies from financial liability, and then they support little guys like this so they can continue their 1% supporting ruse.
I look forward to watching this mad grab at free intellectual property get slapped down by Washington DC. This is not about fighting the government; this guy is a puppet being used by the powers that be in Silicon Valley in order to allow companies like Google to continue to rob, loot, and pillage other people's intellectual property without financial liability.
Not saying you did it, but I had to comment someplace.
However, I feel like there is something very, very wrong about the method and intention of this type of action/complaint.
One thing always bugs me about Americans: despite the liberties that they enjoy, despite the very real capacity to effect change in their government and laws, they all hate "the Government." Who is "the Government"? Wait, aren't they the very candidates that you, the people, vote into office?
Like this idea of "suing the US government." Who are you suing? The executive branch? Why are you suing them? This is over a law. It's a piece of legislation. The executive branch merely, you know, execute the laws. Why not sue Congress? Oh wait, why sue Congress when you can simply vote them out of office? Oh wait, why "stop enforcing" the laws when you can, you know, CHANGE the laws?
This kinda reminds me of the libertarians' ideas of obstruction of legislation so that "the government does not spend more." If not spending is the right thing to do, why not educate people that. Even if one believes that 47% of the population is "takers," 53% is still a majority. So teach, advocate, change minds. But no, they prefer to obstruct their country, risk the centuries of their national reputation, put t heir fellow citizens to starvation. You know, if this happens in schoolyards, we probably call it "bullying." But if a bunch of libertarians do it, it's "principles."
Obviously, I agree with the plaintiff here. However, the method is still wrong. And different from above, there are very few "takers" here. Mostly, it's faceless businesses that (let's be frank here) few people like. So why not take the high road? Why not educate your fellow citizens on the danger of the laws? Why not change minds? Why not raise money for candidates who will change the laws appropriately?
In short: why not be a citizen rather than a rebel? Why not change the system for the better rather than obstruct it? Why not make your society/country a better place rather than simply fight it?
So unless Bunnie has been prosecuted for breaking the DMCA, this is likely going to be an ineffective move.
If you want to change a law without breaking it first, the right way to go about it is petitioning Congress, the lawmaking part of the government.
>Before Section 1201, the ownership of ideas was tempered by constitutional protections. Under this law, we had the right to tinker with gadgets that we bought, we had the right to record TV shows on our VCRs, and we had the right to remix songs.
Wait, before the DMCA "we" had the right to remix songs? Okay, so this case is going nowhere, because the person filing really doesn't quite understand the mechanics of basic copyright. Just throwing out the concept of "remixes" does a disservice to the real nuances of how the rights/permissions/compensation system works, has been tested in court, etc.
The subject of ownership and repair is extremely complex and this lawsuit is frivolous when the matter is being actively tested by John Deere and various farmers. Maybe this person could assist in funding that challenge to 1201. There are some glaring flaws in this whole approach, from what I understand about Copyright law and the DMCA.
Also, I don't know why the EFF continues to push erroneous information regarding how Copyright, the DMCA, and Fair Use actually work:
>This ban applies even where people want to make noninfringing fair uses of the materials they are accessing.
Fair Use always trumps the DMCA; the nature of Fair Use, however, is subject to a four-factor test, should an IP owner feel compelled to assert that the use was not within the spirit and letter of the law. Sometimes it seems like the EFF and TechDirt try to claim things that aren't true just to make a point. It's something that bothers me routinely on this subject in particular.
The term "fail safe" originated with (or is strongly associated with) the Westinghouse railroad brake system: the pressurised air brakes on trains, in which air pressure holds the brake shoes open against spring pressure. Should the integrity of the brake line be lost, the brakes fail in the activated position, slowing and stopping the train (or keeping a stopped train stopped).
Fail-safe designs and practices can lead to some counterintuitive behaviour. Aircraft landing on carrier decks, where they are arrested by cables, apply full engine power and afterburner at touchdown; the idea is that should the arresting cable or hook fail, the aircraft can safely take off again.
Upshot: "fail safe" doesn't mean "test all your failure conditions exhaustively". It may well mean to abort on any failure mode (see djb's software for examples). The most important criterion is that whatever the failure mode be, it be as safe as possible, and almost always, based on a very simple and robust design, mechanism, logic, or system.
From the description of this project, it strikes me that it may well be failing (unsafely?) to implement these concepts. Charles Perrow, scholar of accidents and risks, notes that it's often safety and monitoring systems themselves which play a key role in accidents and failures.
Personally, I'd rather systems fail quickly, with retries only at the highest (application) and lowest (TCP) levels.
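To make the "abort on any failure" idea concrete, here is a minimal sketch (entirely my own illustration, not anything from the project discussed; the paths, database name, and commands are placeholders):

#!/bin/sh
# Fail fast: stop at the first error rather than continuing in an undefined state.
set -eu                          # -e: exit on any command failure; -u: unset variables are errors

BACKUP_DIR=/var/backups/app      # hypothetical path, purely for illustration

mkdir -p "$BACKUP_DIR"                     # if this fails, nothing below runs
pg_dump mydb > "$BACKUP_DIR/mydb.sql"      # if this fails, we never report success
echo "backup completed"                    # only reached when every step above succeeded

The "safe" state here is simply a stopped script with a non-zero exit code that a supervisor or operator can notice, rather than a half-finished run that looks successful.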
From a risk management perspective it's never a good idea to separate liability from control. If the banks don't provide adequate security controls to their customers why should their customers be liable?
Even though the controls that a customer has are less than what Gmail provides, the banks continue to push the illusion that the customer is actually in control. Even the vocabulary they use implies the customer was always in control.
For example: you were a victim of identity theft. How crazy is that? How can somebody steal my identity? Oh, I woke up this morning and I wasn't me!!!
Nope... checked my identity and I'm still me. Why did you allow somebody to steal all my money?
For my ordinary checking account, I opt for convenience. I don't want transactions randomly declined and I don't want to have to wait for banking hours to authorise activity. To compensate, I limit the amount I keep in the account.
For certain other accounts, I opt for more security. Cheques are blocked; foreign transactions are, by default, blocked; online banking must be two-factor authenticated every time; transfers must be authorised with a phone call below certain amounts and in person at a branch, with ID and a passphrase verified, above certain amounts; et cetera. These are flags one can have enabled on most bank accounts. They're just debilitatingly irritating for ordinary use.
If you make banks responsible for user-authorised fraud, e.g. a customer wiring money to a scammer, you're also asking them to nanny you. Freedom and protection from your own stupidity exist on a spectrum.
What am I missing?
More importantly, the customer referenced in the article basically wired all the funds in her account to a scammer then asked the bank for it back. Sorry, but that is the customer's fault, not the bank's fault.
1. Someone thought chmod 777 was a good idea, ever, under any circumstances. Not only is this standard practice in Magento installs (it's a how-to step in many books on Magento), it's all through the actual codebase.
The below is from the Magento Enterprise 184.108.40.206 tarball, downloaded from the company (and I double-checked this after someone questioned this last time I brought this up):
$ grep -r chmod .|grep 777
./downloader/lib/Mage/Backup/Filesystem.php: chmod($backupsDir, 0777);
./app/code/core/Mage/Compiler/Model/Process.php: @chmod($dir, 0777);
./app/code/core/Mage/Install/Model/Installer/Console.php: @chmod('var/cache', 0777);
./app/code/core/Mage/Install/Model/Installer/Console.php: @chmod('var/session', 0777);
./app/code/core/Mage/Install/Model/Installer/Config.php: chmod($this->_localConfigFile, 0777);
./app/code/core/Mage/Catalog/Model/Product/Attribute/Backend/Media.php: $ioAdapter->chmod($this->_getConfig()->getTmpMediaPath($fileName), 0777);
./app/Mage.php: chmod($logDir, 0777);
./app/Mage.php: chmod($logFile, 0777);
./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, '0777');
./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, 0777);
./lib/Zend/Service/WindowsAzure/CommandLine/PackageScaffolder/PackageScaffolderAbstract.php: @chmod($path, 0777);
./lib/Zend/Cloud/StorageService/Adapter/FileSystem.php: chmod($path, 0777);
./lib/Varien/Autoload.php: @chmod($this->_collectPath, 0777);
./lib/Varien/File/Uploader.php: chmod($destinationFile, 0777);
./lib/Mage/Backup/Filesystem.php: chmod($backupsDir, 0777);
./errors/processor.php: @chmod($this->_reportFile, 0777);
The way we eventually dealt with hosting Magento (which we had strongly advised against) was a concrete sarcophagus and a thirty-kilometre exclusion zone:
* A cron line specifically to remove o-w permissions from all files in the webroot every minute (which is very inelegant, but the alternative is maintaining our own patches to core); a sketch of the sort of thing I mean follows this list.
* Files not owned by www-data, except where Magento must be able to write to them.
* Deploy all webroot files as a user the webserver can't write to.
* cron.sh (Magento's internal cron) runs as root out of the box. We ran it as www-data.
* AppArmor to keep Magento from ever, ever being able to pull shit. This caught Magento's more antisocial tendencies on more than one occasion.
* Admin login: use a path other than "/admin" to foil quite a lot of attack bots at the very simplest level.
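For the curious, the cron and ownership bits above looked roughly like this (a sketch only; the paths, schedule, and the deploy user are placeholders rather than our exact configuration):

# /etc/cron.d/magento-hardening -- illustrative only
# Strip the world-write bits Magento keeps re-adding, every minute:
* * * * * root find /var/www/magento -perm -0002 -exec chmod o-w {} + 2>/dev/null
# Run Magento's own cron.sh as www-data rather than root:
*/5 * * * * www-data /bin/sh /var/www/magento/cron.sh

# One-off at deploy time: code owned by a non-webserver user, with only the
# directories Magento genuinely needs to write to handed over to www-data:
chown -R deploy:deploy /var/www/magento
chown -R www-data:www-data /var/www/magento/var /var/www/magento/media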
We have outsourced our remaining Magento, thankfully, and I don't personally have to maintain the above any more. (You know you've been administering Magento a bit long when you can hum along to bits of "Metal Machine Music" accurately.)
The use case for Magento is (apparently deliberately) confused. It's an unholy melange of a CMS and a shopping basket. There is no good out-of-the-box experience; in practice it's a job creation scheme for consultants.
Even crap-tier "well technically I can tell my boss's boss we have paid support" support, with a four-day response time for them to ask you a simple question you already put the answer to in the original ticket, is swingeingly expensive. I can't say what we're paying for this standard of quality, but I can say that it's public knowledge that Magento is at least $13k/yr: http://web.archive.org/web/20120215011525/http://www.magento...
The problem Magento seems to solve is when the business wants a quick site without developer involvement. After a few other abortive platforms (Plone, Drupal - which are both fine for what they are, in ways Magento just isn't, but didn't end up matching our needs), our eventual solution to this was Wordpress, which we have outsourced so I don't have to think about that either. Outsourced Wordpress with securing it being the host's problem is totally the right answer.
I don't have a good answer on the shopping basket, but Magento was bad enough at that too that we went back to our home-rolled in-house system.
I understand some work has gone into Magento 2.0 to make it less mind-bogglingly horrible.
Nope. There is a built-in credit card method that would utilise Magento's crypt(), but you cannot actually hook it up to a bank and get any actual money. The lamest of the lamest Magento developers will use an off-the-shelf payment gateway that uses things like iframes, so that even a non-HTTPS Magento site never has credit card information pass through Magento; instead you get payment references.
In theory you could break the crypto and get into the Magento admin, then export customer email and address details. You could probably refund all customers' orders, but not get fresh money out of them.
I appreciate the code may be 2008 vintage but there is no new vulnerability here that gives any means to access any Magento credit card data in a meaningful way, e.g. a lucrative way.
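For anyone unfamiliar with the pattern being described, here is a rough, hypothetical sketch of how those hosted-gateway "payment references" work (the endpoint, field names, and token are all made up; this is not Magento's or any particular gateway's API):

# The gateway's iframe/JS captures the card directly in the shopper's browser and
# hands the shop an opaque token; the shop then charges that token server-side:
curl -s https://gateway.example.com/v1/charges \
  -u "$GATEWAY_API_KEY:" \
  -d amount=1999 \
  -d currency=usd \
  -d source="tok_opaque_reference_from_iframe"
# The shop's database only ever stores the resulting charge/payment reference,
# never the card number, so compromising the shop yields no usable card data.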