I think a lot of the issue stems from layman explanations of neural networks. Pretty much every time DL is covered by media, there has to be some contrived comparison to human brains; these descriptions frequently extend to DL tutorials as well. It's important for that idea to be dispelled when people actually start applying deep models. The model's intuition doesn't work like a human's, and that can often lead to unsatisfying conclusions (e.g. the panda --> gibbon example that Francois presents).
Unrelatedly, if people were more cautious about anthropomorphization, we'd probably have to deal a lot less with the irresponsible AI fearmongering that seems to dominate public opinion of the field. (I'm not trying to undermine the danger of AI models here, I just take issue with how most of the populace views the field.)
"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction."
There are tens of thousands of scientists and researchers studying the brain at every level, and we are making tiny dents in understanding it. We have no idea what the key ingredient is, nor whether it is one or many ingredients that will take us to the next level. Look at deep learning: we have had the techniques for it since the '70s, yet it is only now that we can start to exploit it. Some people think the next thing is the connectome, time, forgetting neurons, oscillations, number counting, embodied cognition, emotions, etc. No one really knows, and it is very hard to test; the only "smart beings" we know of are ourselves, and we can't really do experiments on humans for legal and ethical reasons. Computer scientists like many of us here like to theorize about how AI could work, but very little of it is tested out. I wish we had a faster way to test out more competing theories and models.
Example: learning to ride a bike. You have no idea how you do it. You can't explain it in words. It requires tons of trial and error. You can give a bike to a physicist who has a perfect, deep understanding of the laws of physics, and they won't be any better at riding than a kid.
And after you learn to ride, change the bike. Take one where the handlebars are reversed, so turning them right turns the wheel left. No matter how good you are at riding a normal bike, no matter how easy it seems it should be, it's very hard. It requires relearning how to ride basically from scratch. And when you are done, you will even have trouble going back to a normal bike. Sounds like the problems of deep reinforcement learning, right?
If you use only the parts of the brain you use to ride a bike, would you be able to do any of the tasks described in the article? E.g. learn to guide spacecraft trajectories with little training, through purely analog controls and muscle memory? Can you even sort a list in your head without the use of pencil and paper?
Similarly, recognizing a toothbrush as a baseball bat isn't as bizarre as you think. Most NNs get one pass over an image. Imagine you were flashed that image for just a millisecond and given no time to process it, no time even to scan it with your eyes. Are you certain you wouldn't make any mistakes?
But we can augment NNs with attention, with feedback to lower layers from higher layers, and other tricks that might make them more like human vision. It's just very expensive.
And that's another limitation. Our largest networks are incredibly tiny compared to the human brain. It's amazing they can do anything at all. It's unrealistic to expect them to be flawless.
There is a video here https://www.youtube.com/watch?v=hUnRCxnydCc
I think this has some better examples than the Panda vs Gibbon example in the OP if you want to 'see' why a model may classify a tree-frog as a tree-frog vs a billiard (for example). IMO this suggests some level of anthropomorphizing is useful for understanding and building models as the pixels the model picks up aren't really too dissimilar to what I imagine a naive, simple, mind might use. (i.e the tree-frog's goofy face) We like to look at faces for lots of reasons but one of them probably is because they're usually more distinct which is the same, rough, reason why the model likes the face. This is interesting (to me at least) even if it's just matrix multiplication (or uncrumpling high dimensional manifolds) underneath the hood,
The article notes, "Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way". But this ignores a lot of the subtlety in psychological models of human consciousness. In particular, I'm thinking of Dual Process Theory as typified by Kahneman's "System 1" and "System 2". System 1 is described as a tireless but largely unconscious and heavily biased pattern recognizer - subject to strange fallacies and working on heuristics and cribs, it reacts to its environment when it believes that it recognizes stimuli, and notifies the more conscious "System 2" when it doesn't.
At the very least it seems like neural networks have a lot in common with Kahneman's "System 1".
While pattern matching can be applied to model the process of cognition, DL cannot really model abstractive intelligence on its own (unless we phrase it as a pattern learning problem, viz. transfer learning, on a very specific abstraction task), and much less can it model consciousness.
Yes we have engineered better NN implementations and have more compute power, and thus can solve a broader set of engineering problems with this tool, but is that it?
Per my understanding, each vector space represents the full state of that layer, which is probably why the transformations work for such vector spaces.
A sorting algorithm unfortunately cannot be modeled as a set of vector spaces each representing the full state. For instance, an intermediate state of a quicksort does not represent the full state: even if a human were to look at that intermediate step in isolation, they would have no clue what that state represents. On the contrary, if you observe the visualized activations of an intermediate layer in VGG, you can understand that the layer represents some elements of an image.
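To make that concrete, here is a minimal sketch (the `partition` helper and the sample list are my own illustration, not from the article): after a single quicksort partition step, the list is in a state that is meaningful to the algorithm but opaque to an outside observer.

```python
def partition(arr, lo, hi):
    """One Lomuto partition step of quicksort: the pivot (arr[hi])
    ends up in its final sorted position; everything else is unordered."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i

data = [7, 2, 9, 4, 3, 8, 6]
p = partition(data, 0, len(data) - 1)
# Intermediate state: only the pivot is guaranteed to be in place;
# looked at in isolation, the rest of the list tells you nothing.
print(data, "pivot index:", p)
```

Contrast this with a VGG activation map, where an intermediate layer's state is still interpretable on its own.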
There are many professions where there is very little data available to learn from. In some cases (self-driving), companies will invest large amounts of money to build this data, by running lots of test self-driving cars or paying people to create the data, and it is viable given the size of the market behind it. But the typical high-value intellectual profession is often a niche market with a handful of specialists in the world. Think of a trader of financial-institution bonds, a lawyer specialized in cross-border mining acquisitions, a physician specializing in a rare disease, or a salesperson for aviation parts. What data are you going to train your algorithm with?
The second objection, probably equally important, also applies to "software will replace [insert your boring repetitive mindless profession here]", even after 30 years of broad adoption of computers. If you decide to automate some repetitive mundane tasks, you can spare the salary of the guys who did these tasks, but now you need to pay the salary of a full team of AI specialists / software developers. Now for many tasks (CAD, accounting, mailings, etc), the market is big enough to justify a software company making this investment. But there is a huge number of professions where you are never going to break even, and where humans are still paid to do stupid tasks that a software could easily do today (even in VBA), and will keep doing so until the cost of developing and maintaining software or AI has dropped to zero.
I don't see that happening in my lifetime. In fact, I am not even sure we are training that many more computer science specialists than 10 years ago. Again, it didn't happen with software for very basic things; why would it happen with AI for more complicated things?
Search for things like "Towards Deep Developmental Learning" or "Overcoming catastrophic forgetting in neural networks" or "Feynman Universal Dynamical" or "Wang Emotional NARS". No one seems to have put together everything or totally solved all of the problems but there are lots of exciting developments in the direction of animal/human-like intelligence, with advanced NNs seeming to be an important part (although not necessarily in their most common form, or the only possible approach).
Well, maybe we should train systems with all our sensory inputs first, the way newborns learn about the world. Then make these models available open source, like we release operating systems, so others can build on top of them.
For example, we have ImageNet, but we don't have WalkNet, TasteNet, TouchNet, SmellNet, HearNet... or other extremely detailed sensory data recorded over an extended time. And these should be connected to match the experiences. At least, I'm not aware of any out there :)
As long as the design process of deep networks remains founded in trial and error, and there are no convergence theorems or approximation guarantees, no one can be sure what deep learning can do and what it could never do.
"To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like."
I was wondering how a NN would go about discovering F = ma and the laws of motion. As far as I can tell, it has a lot of similarities to how humans would do it. You'd roll balls down slopes like in high school and get a lot of data. And from that you'd find there's a straight line model in there if you do some simple transformations.
But how would you come to hypothesise about what factors matter, and what factors don't? And what about new models of behaviour that weren't in your original set? How would the experimental setup come about in the first place? It doesn't seem likely that people reason simply by jumbling up some models (it's a line / it's inverse distance squared / only mass matters / it matters what color it is / etc), but that may just be education getting in my way.
A machine could of course test these hypotheses, but they'd have to be generated from somewhere, and I suspect there's at least a hint of something aesthetic about it. For instance you have some friction in your ball/slope experiment. The machine finds the model that contains the friction, so it's right in some sense. But the lesson we were trying to learn was a much simpler behaviour, where deviation was something that could be ignored until further study focussed on it.
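As a toy illustration of the curve-fitting half of this (not the hypothesis generation, which is the hard part), a least-squares fit on simulated, noisy trial data can recover the hidden mass from F = ma; all the numbers below are invented:

```python
import random

random.seed(0)
m = 2.5  # the "unknown" mass we hope the fit recovers

# Simulated trials: applied force vs. measured acceleration, with
# measurement noise standing in for friction and sloppy stopwatches.
forces = [float(f) for f in range(1, 21)]
accels = [f / m + random.gauss(0, 0.05) for f in forces]

# Closed-form least squares for a line through the origin, a = (1/m) * F:
slope = sum(f * a for f, a in zip(forces, accels)) / sum(f * f for f in forces)
estimated_mass = 1 / slope
print(f"estimated mass: {estimated_mass:.2f}")  # close to 2.5
```

The fit finds the straight line easily; what it cannot do is decide that force and mass matter while color does not, or that the residual wobble is "friction, ignore for now" rather than part of the law.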
This statement has a few problems: there is no real reason to interpret the transforms as geometric (they are fundamentally just processing a bunch of numbers into other numbers; in what sense is this geometric?), and the focus on human-annotated data is not quite right (deep RL and other approaches such as representation learning have also achieved impressive results). More importantly, saying "a deep learning model is 'just' a chain of simple, continuous geometric transformations" is pretty misleading; things like the Neural Turing Machine have shown that enough composed simple functions can do surprisingly complex stuff. It's good to point out that most of deep learning is just fancy input->output mappings, but I feel this post somewhat overstates the limitations.
I'm really looking forward to this. If it comes out looking like something faster and more usable than Bayesian program induction, RNNs, neural Turing Machines, or Solomonoff Induction, we'll have something really revolutionary on our hands!
He's on the right track. Of course, the general thrust goes beyond deep learning. The projection of intelligence onto computers is first and foremost wrong because computers are not able, not even in principle, to engage in abstraction, and claims to the contrary make for notoriously bad, reductionistic philosophy. Ultimately, such claims underestimate what it takes to understand and apprehend reality and overestimate what a desiccated, reductionistic account of mind and the broader world could actually accommodate vis-a-vis the apprehension and intelligibility of the world.
Take your apprehension of the concept "horse". The concept is not a concrete thing in the world. We have concrete instances of things in the world that "embody" the concept, but "horse" is not itself concrete. It is abstract and irreducible. Furthermore, because it is a concept, it has meaning. Computers are devoid of semantics. They are, as Searle has said ad nauseam, purely syntactic machines. Indeed, I'd take that further and say that actual, physical computers (as opposed to abstract, formal constructions like Turing machines) aren't even syntactic machines. They do not even truly compute. They simulate computation.
That being said, computers are a magnificent invention. The ability to simulate computation over formalisms -- which themselves are products of human beings who first formed abstract concepts on which those formalisms are based -- is fantastic. But it is pure science fiction to project intelligence onto them. If deep learning and AI broadly prove anything, it is that in the narrow applications where AI performs spectacularly, it is possible to substitute what amounts to a mechanical process for human intelligence.
Here's how I've been explaining this to non-technical people lately:
"We do not have intelligent machines that can reason. They don't exist yet. What we have today is machines that can learn to recognize patterns at higher levels of abstraction. For example, for imagine recognition, we have machines that can learn to recognize patterns at the level of pixels as well as at the level of textures, shapes, and objects."
If anyone has a better way of explaining deep learning to non-technical people in a few short sentences, I'd love to see it. Post it here!
If you look at the example with the blue dots at the bottom, would it not just take many more blue dots to fill in what the neural network doesn't know? I understand that adding more blue dots isn't easy - we'd need a huge amount of training data, and huge amounts of compute to follow; but if increasing the scale is what got these to work in the first place, I don't see why we shouldn't try to scale up even more.
The Computational Cognitive Neuroscience Lab has been studying this topic for decades and has an online textbook here:
The "emergent" deep learning simulator is focused on using these kinds of models to model the brain:
See also, if you can, the film "Being in the world", which features Dreyfus.
> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?]; for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].
> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]
I tend to disagree with these opinions, but I think the author's opinions aren't unreasonable; I just wish he would explain them rather than reiterating them.
So, right, current approaches to "machine learning" as in the OP have some serious "limitations". But this point is a small, tiny special case of something else much larger and more important: current approaches to "machine learning" as in the OP are essentially some applied math, and applied math is commonly much more powerful than machine learning as in the OP and has much less severe limitations.

Really, "machine learning" as in the OP is not learning in any significantly meaningful sense at all. Really, apparently, the whole field of "machine learning" is heavily just hype from the deceptive label "machine learning". That hype is deceptive, apparently deliberately so, and unprofessional.

Broadly, machine learning as in the OP is a case of old empirical curve fitting, where there is a long history with a lot of approaches quite different from what is in the OP. Some of the approaches are, under some circumstances, much more powerful than what is in the OP.

The attention to machine learning omits a huge body of highly polished knowledge that is usually much more powerful. In a cooking analogy, you are being sold a state fair corn dog, which can be good, instead of everything in Escoffier,

Prosper Montagné, Larousse Gastronomique: The Encyclopedia of Food, Wine, and Cookery, ISBN 0-517-503336, Crown Publishers, New York, 1961.

Essentially, for machine learning as in the OP, if (A) you have a LOT of training data, (B) a lot of testing data, (C) by gradient descent or whatever you build a model of some kind that fits the training data, and (D) the model also predicts well on the testing data, then (E) you may have found something of value.

But the test in (D) is about the only assurance of any value. And the value in (D) needs an assumption: applications of the model will, in some suitable sense rarely made clear, be close to the training data.
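The (A)-(E) recipe, sketched on made-up data (a one-parameter line fit stands in for "gradient descent or whatever"):

```python
import random

random.seed(1)

# (A)/(B) Invented data from y = 3x + noise, split into train and test.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [3 * x + random.gauss(0, 1) for x in xs]
x_train, y_train = xs[:150], ys[:150]
x_test, y_test = xs[150:], ys[150:]

# (C) Fit a one-parameter model y = w*x to the training data
# (closed-form least squares instead of gradient descent).
w = sum(x * y for x, y in zip(x_train, y_train)) / sum(x * x for x in x_train)

# (D) Check prediction error on the held-out test data -- the only
# assurance of value, and only for inputs near the training data.
test_mse = sum((y - w * x) ** 2 for x, y in zip(x_test, y_test)) / len(x_test)
print(f"fitted w = {w:.2f}, test MSE = {test_mse:.2f}")
```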
Such fitting goes back at least to

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

-- not nearly new. This work is commonly called CART, and there has long been corresponding software.

And CART goes back to versions of regression analysis that go back maybe 100 years.

So, sure, in regression analysis, we are given points on an X-Y coordinate system and want to fit a straight line so that, as a function of points on the X axis, the line does well approximating the points on the X-Y plot. Being more specific would use some mathematical notation awkward for simple typing and, really, likely not needed here.

Well, to generalize, the X axis can have several dimensions, that is, accommodate several variables. The result is multiple linear regression.
For more, there is a lot with a lot of guarantees. Can find those in short and easy form in

Alexander M. Mood, Franklin A. Graybill, and Duane C. Boes, Introduction to the Theory of Statistics, Third Edition, McGraw-Hill, New York, 1974.

with more detail but still easy form in

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

with much more detail and carefully done in

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

Right, this stuff is not nearly new.

So, with some assumptions, you get lots of guarantees on the accuracy of the fitted model.

This is all old stuff.

The work in machine learning has added some details to the old issue of overfitting, but, really, the math in old regression takes that into consideration -- a case of overfitting will usually show up in larger estimates for errors.

There is also spline fitting, fitting from Fourier analysis, autoregressive integrated moving average processes,

David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7, Holden-Day, San Francisco, 1981.

and much more.
But let's see some examples of applied math that totally knocks the socks off model fitting:

(1) Early in civilization, people noticed the stars, and the ones that moved in complicated paths, the planets. Well, Ptolemy built some empirical models based on epicycles that seemed to fit the data well and have good predictive value.

But much better work came from Kepler, who discovered that, really, if you assume that the sun stays still and the earth moves around the sun, then the paths of the planets are just ellipses.

Next, Newton invented the second law of motion, the law of gravity, and calculus, and used them to explain the ellipses.

So, what Kepler and Newton did was far ahead of what Ptolemy did.

Or, all Ptolemy did was some empirical fitting, while Kepler and Newton explained what was really going on and, in particular, came up with much better predictive models.

Empirical fitting lost out badly.

Note that once Kepler assumed that the sun stands still and the earth moves around the sun, he actually didn't need much data to determine the ellipses. And Newton needed nearly no data at all except to check his results.

Or, Kepler and Newton had some good ideas, and Ptolemy had only empirical fitting.

(2) The history of physical science is just awash in models derived from scientific principles that are then verified by fits to data.

E.g., some first-principles derivations show what the acoustic power spectrum of the 3 K background radiation should be, and the fit to the actual data from WMAP, etc. was astoundingly close.

News flash: commonly some real science or even just real engineering principles totally knock the socks off empirical fitting, with much less data.
(3) E.g., here is a fun example I worked up while in a part-time job in grad school: I got some useful predictions for an enormously complicated situation out of a little applied math and nearly no data at all.

I was asked to predict what the survivability of the US SSBN fleet would be under a special scenario of global nuclear war limited to sea.

Well, there was a WWII analysis by B. Koopman that showed that in search, say, of a submarine for a surface ship, an airplane for a submarine, etc., the encounter rates were approximately a Poisson process.

So, for all the forces in that war at sea, for the number of forces surviving, with some simplifying assumptions, we have a continuous-time, discrete-state-space Markov process subordinated to a Poisson process. The details of the Markov process come from a little data about detection radii and the probabilities that, at a detection, one dies, the other dies, both die, or neither dies.

That's all there was to the setup of the problem, the model.

Then to evaluate the model, just use Monte Carlo to run off, say, 500 sample paths, average those, appeal to the strong law of large numbers, and presto, bingo, done. Also can easily put up some confidence intervals.
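A toy version of that Monte Carlo evaluation, with all rates and outcome probabilities invented (in the real study they came from detection-radius data):

```python
import math
import random

random.seed(42)

def sample_path(blue=10, red=10, rate=1.0, horizon=5.0):
    """One sample path: encounters arrive as a Poisson process whose rate
    scales with the surviving forces; each encounter kills one side, both,
    or neither (made-up probabilities). Returns blue survivors."""
    t = 0.0
    while blue > 0 and red > 0:
        t += random.expovariate(rate * blue * red)  # time to next encounter
        if t > horizon:
            break
        u = random.random()
        if u < 0.4:
            red -= 1                  # blue kills red
        elif u < 0.8:
            blue -= 1                 # red kills blue
        elif u < 0.9:
            blue -= 1
            red -= 1                  # both lost
        # else: neither lost
    return blue

# Average 500 sample paths (strong law of large numbers) plus a 95% CI.
paths = [sample_path() for _ in range(500)]
mean = sum(paths) / len(paths)
sd = math.sqrt(sum((x - mean) ** 2 for x in paths) / (len(paths) - 1))
half = 1.96 * sd / math.sqrt(len(paths))
print(f"expected blue survivors: {mean:.2f} +/- {half:.2f}")
```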
The customers were happy.

Try to do that analysis with big data and machine learning and you will be in deep, bubbling, smelly, reeking, flaming, black and orange, toxic sticky stuff.

So, a little applied math, some first principles of physical science, or some solid engineering data commonly totally knocks the socks off machine learning as in the OP.
There's quite a few others but these were the most readily available papers.
Are deep nets AGI? No, but they're a lot better than Mr. Chollet gives them credit for.
Yes, but that's what humans do too, only much, much better from the generalized perspective.
I think that fundamentally this IS the paradigm for AGI, but we are in the pre-infant days of optimization across the board (data, efficiency, tagging, etc.).
So I wholeheartedly agree with the post, that we shouldn't cheer yet, but we should also recognize that we are on the right track.
I say all this because prior to getting into DL and more specifically Reinforcement Learning (which is WAY under studied IMO), I was working with Bayesian Expert Systems as a path to AI/AGI. RL totally transformed how I saw the problem and in my mind offers a concrete pathway to AGI.
Why make the video at all? If I were them I would scrap all the bullshit and just spend the two minutes in the video explaining the basics of WHY and HOW it works. People visiting that site already know what a "decentralized bitcoin exchange" is.
The problem with assuming good faith and using actor reputation (even third-party arbitrated) is that, in becoming a trusted actor, the amount of money available for cut-and-run scenarios increases exponentially (both for arbitrator and actor), until it ultimately makes sense and happens (eg. numerous scam darknet markets, etc.)... often the claim is "sorry we got hacked!"
Using real-world user identities as insurance has the issue that using one's fiat bank account to perform automated or semi-automated trades on behalf of others is probably against terms of service, or at a minimum vaguely arguably so when politically expedient. Therefore, revealing the real-world identity of an accused bad actor (i.e. the fiat account holder) as insurance against bad behavior is likely to expose them to an undue scale of legal hassle and/or asset seizure, which is not something wise to trust a third party with, no matter how trustworthy the arbitrators are supposed to be.
My gut feeling is that such systems work only at small scale, with a veneer of trust that can be established in different ways: deposit is placed with counterparty, reputation within some shared community, mafia boss will murder you if you rip off the system, etc. Between absolute strangers, it is exceptionally difficult to reliably scale, even if you can establish it.
Finally, an important point: frequent <1 BTC transfer activity to random destinations on conventional fiat accounts is likely to trigger bank anti-money-laundering (AML) heuristics.
It would be fun to ask the Devs some questions.
e.g.: if peers are able to select their arbitrators, how do you prevent a peer and arbitrator from gaming the system? There is a secondary arbitrator, but from the docs it looks like the funds are released after the initial arbitration.
Is there a way to protect against a root DHT node hijack? The only reference I see to this is a TODO: "See how btc does this."
1. There are two sides to a trade, Alice and Bob.

2. Alice has USD and Bob has Bitcoins.

3. Both sides wish to trade money but they don't trust each other.

4. To do this, they deposit collateral in the form of Bitcoins into an escrow account (multiple mediators need to sign to give the collateral back to its owners). This is a bond separate from the money they are already trading.

5. Alice sends her USD to Bob.

6. Bob sends his Bitcoins to Alice.

7. If either side cheated, the mediators won't sign the "check" to release funds from the escrow account. Therefore, so long as the value of the collateral is worth more than the potential profit from scamming, there is no incentive to scam.
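The incentive argument in step 7 boils down to one inequality; a trivial sketch, with made-up numbers:

```python
def scam_is_profitable(trade_value, collateral):
    """A rational cheater gains the trade value but forfeits the escrowed
    collateral, so cheating only pays when the collateral is worth less
    than the trade."""
    return trade_value > collateral

# With collateral at least equal to the trade, there is no incentive to scam.
print(scam_is_profitable(trade_value=1.0, collateral=1.2))  # False
```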
As you can see this scheme has a few problems:
1. Users are required to have Bitcoins for collateral. So if you don't already have Bitcoins you can't buy any (strange scenario).

2. It relies on collateral, period, so you can never buy and sell the full amount of funds that you have.

3. Liquidity is poor. BitSquare could be improved if they had more investors and structured the exchange to provide liquidity themselves at a premium.

4. It's unclear how secure the notaries are and whether or not they can be cheated.

5. Reputation isn't that secure and the model doesn't account for attackers, though I think BitSquare solves this with multiple mediators.
It's good to see that BitSquare is still around though. Decentralized exchanges haven't had much adoption so far and I haven't seen anyone who nailed every usability problem that these exchanges have. Even assets on Ethereum where you can literally write simple code that says "a transfer occurs if two users agree to it" are traded on "decentralized exchanges" with multiple vulnerabilities and bad UX for traders.
Like LocalBitcoins, but with no need to meet up.
"I would rather my kids grow up without a Dad than live without my adrenaline fix"
I am neither a father nor a cave diver, though, so I might be missing a piece of the puzzle. Would either group of people care to comment?
Most of the training focuses on systems, skills repetition, and understanding and using redundant systems - folks getting into cave diving typically are already extremely experienced divers who if anything need only some minor skill tweaks - most cave instructors will not take on students who don't already have significant open water technical diving experience (multiple tanks, mixed gas, rebreathers, decompression, wreck, etc).
A running joke is that the lost line drill (where you're placed intentionally off of the guide line and have to find it without a mask/light/visibility) is the most punctual cave task you'll ever do - you have the rest of your life to get it right.
Here's a few good books on it (non-affiliate links):
Caverns Measureless to Man by Sheck Exley (the father of cave diving): https://www.amazon.com/Caverns-Measureless-Man-Sheck-Exley/d...
The Darkness Beckons by Martyn Farr: https://www.amazon.com/Darkness-Beckons-History-Development-...
Beyond the Deep by Bill Stone (the Tony Stark of cave diving): https://www.amazon.com/Beyond-Deep-Deadly-Descent-Treacherou...
The Cenotes of the Riviera Maya by Steve Gerrard (patron saint / mapper of Yucatan caves): https://www.amazon.com/Cenotes-Riviera-Maya-2016/dp/16821340... This is more of a map with explanatory notes, but it gives great insight into the complexity of it. Currently there are two systems that almost all cenotes in the Yucatan are part of, and there's some really interesting work going on trying to link the two. Current work is going on at about 180m depth through a number of rooms at the back of "The Pit", and there are multi-day expeditions trying to find the linkage.
Can someone explain this phenomenon? How can the water in a sea cave become potable?
I cave dive on a regular basis with two other guys. We've dived together as a team for nearly 10 years. I'm late 60s and single, the second guy is 50s and has a partner but no children, and the third is early 40s with a six-year-old, who he has every intention of seeing grow-up into adulthood.
We often dive in a system comprising a complex maze of 8kms of underwater tunnels. Some are large, and would fit several divers across, but some are small, and you can barely squeeze through. The only entry to and exit from this system is a small pond, about 6 feet across and 4 feet deep, just big enough for one person to get in at a time. Then you scrunch yourself up, and drop down through a slot to enter the system.
We'd generally go about 700m into this system, making up to 13 separate navigational decisions (left? right? straight ahead?) which we have to reverse precisely to get back out at the end. This is all completely underwater; there's no air anywhere except for two air pockets hundreds of meters apart. As I like to say, in cave diving there is no UP, there is only OUT!
It all sounds pretty dangerous, right? Wrong.
NAVIGATION. The whole system is set up with fixed lines, each of which has a numbered marker every 50m or so. Before each dive, we consult the map, and plan exactly where we're going to go. I commit that plan to memory, write it down on a wrist slate, and also in a notebook which I take underwater. All three of us do this independently. Underwater, when we come to a junction, each of us checks the direction to go, then marks the exit direction with a personal marker. If anyone makes a mistake, for example, turns in the wrong direction, or forgets to leave a personal marker, the other two pick that up immediately. On the way back, when we get to each junction, each of us checks that it's the junction we expected, and we can see our personal markers. Each individual's markers can be distinguished by feel alone, so we could get the whole way back, separately, in total darkness, if we had to. So the odds of us getting lost in the system are very low.
LIGHT. These caves are absolutely pitch black, so naturally you need a torch. In fact, nine torches! Each of us individually has a multi-thousand-lumen canister battery light, plus 2 backup torches, each of which would last the whole dive. I could also navigate by the light of my dive computer screen, and I'm considering carrying a cyalume chemical lightstick as well. So then I personally would have five different sources of light, and we'd need 11 sources of light to fail before the team would be left in the dark. The risk of this happening is basically zero.
GAS. Each of us has two tanks in a fully redundant setup. If one side fails, we just go to the other and call the dive. In fact, our gas planning allows one diver's entire gas supply to fail, at the point of maximum penetration, and either one of the other two divers could get that guy back, plus himself, without relying on the third diver at all. However, gas is certainly a limited resource underwater, so it's always on our minds, and all three of us will turn the dive as soon as anyone hits their safety limit.
There's lots more equipment involved, but let's leave it there for the moment, and turn our attention to...
DRIVING! Each of us lives >400 km away from that system. So there and back is a five hour drive. During that drive, you could fall asleep and run off the road; have local fauna run out in front of your car; be hit head-on by a drunk driver; and so on. Several of those are external risks that are not under our control.
So the simple fact of the matter is this. Our cave dives are almost certainly SIGNIFICANTLY SAFER than driving to and from the dive site! The cave dives carry significant potential risks, but most of those are mitigated with proper training and equipment. Whereas there's not much I can do to stop a drunken driver running head-on into me.
Certainly there are risks like tunnels collapsing and blocking the exit. But statistically, I'm sure that those are orders of magnitude less likely than having a heart attack, or falling over and breaking your neck.
Hope that helps :-)
I guess this is kind of silly and naive, but it's what I would do.
And this is not unique to machine learning, per se. https://fivethirtyeight.com/features/trump-noncitizen-voters... has a great widget that shows that as you get more data, you do not necessarily decrease inherent noise. In fact, it stays very constant. (Granted, this is in large part because machine learning has most of its roots in statistics.)
More explicitly, with ML, you are building probabilistic models. This is contrasted to most models folks are used to which are analytic models. That is, you run the calculations for an object moving across the field, and you get something within the measurement bounds that you expected. With a probabilistic model, you get something that is within the bounds of being in line with previous data you have collected.
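To make that distinction concrete, here's a minimal sketch (my own illustration, not from the linked widget): collecting more data shrinks your uncertainty about the *average* effect, but the inherent noise around individual observations stays put.

```python
import random

random.seed(0)

def simulate(n):
    """Draw n noisy observations and report two spreads:
    the residual noise level (does NOT shrink with n) and
    the standard error of the estimated mean (DOES shrink with n)."""
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(data) / n
    var = sum((d - mean) ** 2 for d in data) / (n - 1)
    sd = var ** 0.5
    se_mean = sd / n ** 0.5
    return sd, se_mean

for n in (100, 10_000):
    sd, se = simulate(n)
    print(f"n={n:6d}  residual sd={sd:.2f}  se of mean={se:.3f}")
```

With 100x more data the standard error of the mean drops by about 10x, but the residual spread stays near 1.0: that's the "inherent noise" a probabilistic model has to live with.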
(None of this is to say this is a bad article. Just a bias to keep in mind as you are reading it. Hopefully, it helps you challenge it.)
Introduction, Regression/Classification, Cost Functions, and Gradient Descent:
Perceptrons, Logistic Regression, and SVMs:
Neural networks & Backpropagation:
Regardless, that post was a great read.
It is far too frequently misunderstood as the science of making certainty from uncertainty.
Even better, you can put priors on the parameters of your model and give it the full Bayesian treatment via MCMC. This avoids overfitting, and gives you information about how strongly your data specifies the model.
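As a toy illustration of that idea (a minimal sketch, assuming Gaussian data with an unknown mean, a wide Gaussian prior, and a hand-rolled random-walk Metropolis sampler rather than a real PPL):

```python
import math
import random

random.seed(1)

# Toy data: 50 draws from N(3, 1); we pretend the mean is unknown.
data = [random.gauss(3.0, 1.0) for _ in range(50)]

def log_post(mu):
    # log prior N(0, 10^2) + log likelihood N(mu, 1), up to constants
    lp = -mu * mu / (2 * 10.0 ** 2)
    ll = -sum((x - mu) ** 2 for x in data) / 2
    return lp + ll

def metropolis(steps=5000, step_size=0.5):
    """Random-walk Metropolis: propose a nearby mu, accept with
    probability min(1, posterior ratio)."""
    mu, samples = 0.0, []
    for _ in range(steps):
        prop = mu + random.gauss(0.0, step_size)
        if math.log(random.random()) < log_post(prop) - log_post(mu):
            mu = prop
        samples.append(mu)
    return samples[1000:]  # drop burn-in

samples = metropolis()
mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"posterior mean ~ {mean:.2f}, posterior sd ~ {sd:.2f}")
```

The posterior standard deviation is exactly the "how strongly your data specifies the model" part: more or noisier data moves it, while a point estimate alone would hide it. In practice you'd reach for a library (PyMC, Stan) rather than rolling your own sampler.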
most of the time there is no a priori way of determining this
you come to the problem with your own assumptions (or you inherit them) and that guides you (or misguides you)
Either way, I think this is a good introduction for someone who is looking to do very simple things with an IFTTT integration. I don't think a node.js server running a python script inside of a docker container is the best way to go about it. Anyone who is trying to learn how to write integrations to services (such as IFTTT) will get the wrong impression if they try to dissect this code.
You can also write your own private apps that execute on your own infrastructure in whatever language you want.
I've been concocting a bunch of things in AWS Lambda lately which should be in a service like IFTTT.
Sorry but I could understand if you had to maintain a legacy code base.
Don't get me wrong, the new design looks great, but it feels like every six months they switch between having side navigation and putting everything on top. Every time they change there's a definite improvement, but you'd think they'd be able to come up with a design that works for at least a few years.
Solarized-dark with a white frame can be quite grating on the eyes.
Hope this will be fixed.
Thumbs up for creating such an awesome product and involving the community in your thought process!
Edit: Appears to be the same path but with `about` subdomain
Aside from that, it sounds like a well thought out feature, and it's good that they're redoing it instead of just changing it bit by bit.
Gitlab final prototype: https://about.gitlab.com/images/blogimages/redesigning-gitla...
The resemblance is uncanny.
Can anybody help? Am I simply not technically competent enough to consume this article yet?
It will never be sufficient. A good backbone infrastructure doesn't compensate for the fact that the majority of users don't have ISP choices especially for fast speed fixed/mobile networks.
Such a mind blowing statement. Wonder when (if) they'll hit one-in-three bytes.
"BBR: Congestion-Based Congestion Control" http://queue.acm.org/detail.cfm?id=3022184
And while I agree about overcomplicated routers and box-centric thinking in computer networks, it's pretty much impossible to change things because of the monopolistic nature of the ISP industry. They are very far from competing on the levels of quality where SDN could matter.
All very exciting!
I particularly like this language here... "Dive is a tool for interactively exploring up to tens of thousands of multidimensional data points, allowing users to seamlessly switch between a high-level overview and low-level details. ...Dive makes it easy to spot patterns and outliers in complex data sets." https://github.com/pair-code/facets#facets-dive
That's key functionality for drilling into our data with powerful navigable dashboards and visualization tools. We're creating this kind of seamless transition with some Python, Flask, and Bokeh tooling, but nothing as impressive as Facets. We've cued in all the domain-specific things of interest, but it's nice to see a general-purpose feature set on display with Facets.
And... I keep waiting for MS to provide an add-in to Excel that will allow ML analysis and similar visualization.
Even better, someone could beat MS to it and do one for LibreOffice Calc.
I took a look at the tamarin model, and at least for me it looks pretty much impenetrable (no surprise there). Is it realistic to think that (any?) implementors can use the proven model as the primary reference when implementing the protocol? Especially if you'd strip away the comments, which are not proven to be correct and as such might be misleading?
If yes, what made you prefer it?
Looking at the primitives, isn't WireGuard effectively using the Noise protocol?
But the American kids only know how to deal with foreigners admiring the American way and looking to assimilate into American culture, plus, catering to foreigners is not really in the dictionary of the mightiest country.
Talk about expectation mismatch.
Sorry, no I don't.
It's an important reminder we do not exist in a vacuum and our actions have an indirect impact on a large number of people.
Reminds me of all the other mimicry humans will unconsciously engage in, like mirroring body language or even subtly mirroring an accent in conversation.
COWEN: Final question. The world of social media, we all know it's not going away. Maybe it has some problems, but if you were to give a student or a person some piece of advice or intellectual ammunition to carry with them through this world, some book, some essay, some thought, so as to make it marginally better rather than marginally worse, what would that be?
LEPORE: Read this E. B. White essay called "Death of a Pig."
COWEN: And what does he tell us in "Death of a Pig"?
LEPORE: A pig dies on his . . . He was in Maine. He's trying to understand what it means when something dies when you didn't expect it to die and you couldn't save it, and I just find it a very beautiful essay. But I think something is dying, and we can't save it, and that's a good place to start, to figure out how to feel about that.
 - http://www.cbc.ca/news/canada/hamilton/news/animal-activist-...
 - https://www.washingtonpost.com/news/animalia/wp/2017/05/05/j...
The narrator seems to have a lot of investment in the pig's health:
The pig's lot and mine were inextricably bound now, as though the rubber tube were the silver cord. From then until the time of his death I held the pig steadily in the bowl of my mind; the task of trying to deliver him from his misery became a strong obsession. His suffering soon became the embodiment of all earthly wretchedness.
I have written this account in penitence and in grief, as a man who failed to raise his pig, and to explain my deviation from the classic course of so many raised pigs. The grave in the woods is unmarked, but Fred can direct the mourner to it unerringly and with immense good will, and I know he and I shall often revisit it, singly and together, in seasons of reflection and despair, on flagless memorial days of our own choosing.
I had assumed that there could be nothing much wrong with a pig during the months it was being groomed for murder; my confidence in the essential health and endurance of pigs had been strong and deep, particularly in the health of pigs that belonged to me and that were part of my proud scheme.
Is the sense of loss one of not conforming to a rigid script that society sets out? Is it because the narrator has a genuine sense of empathy but just ignores the fact that they'll slaughter and eat the thing they have empathy for? Or is it a statement that we are all eventually bound for a soulless premeditated murder and the only thing we can hope for is a comfortable prison before the time comes?
Or am I just expecting too much introspection from someone who hasn't examined their own motives and actions?
1. The Web Application Hacker's Handbook - It's beginning to show its age, but this is still absolutely the first book I'd point anyone to for learning practical application security.
2. Practical Reverse Engineering - Yep, this is great. As the title implies, it's a good practical guide and will teach many of the "heavy" skills instead of just a platform-specific book targeted to something like iOS. Maybe supplement with a tool-specific book like The IDA Pro Book.
3. Security Engineering - You can probably read either this or The Art of Software Security Assessment. Both of these are old books, but the core principles are timeless. You absolutely should read one of these, because they are like The Art of Computer Programming for security. Everyone says they have read them, they definitely should read them, and it's evident that almost no one has actually read them.
4. Shellcoder's Handbook - If exploit development is your thing, this will be useful. Use it as a follow-on from a good reverse engineering book.
5. Cryptography Engineering - The first and only book you'll really need to understand how cryptography works if you're a developer. If you want to make cryptography a career, you'll need more; this is still the first book basically anyone should pick up to understand a wide breadth of modern crypto.
You Can Skip:
1. Social Engineering: The Art of Human Hacking - It was okay. I am biased against books that don't have a great deal of technical depth. You can learn a lot of this book by reading online resources and by honestly having common sense. A lot of this book is infosec porn, i.e. "Wow I can't believe that happened." It's not a bad book, per se, it's just not particularly helpful for a lot of technical security. If it interests you, read it; if it doesn't, skip it.
2. The Art of Memory Forensics - Instead of reading this, consider reading The Art of Software Security Assessment (a more rigorous coverage) or Practical Malware Analysis.
3. The Art of Deception - See above for Social Engineering.
4. Applied Cryptography - Cryptography Engineering supersedes this and makes it obsolete, full stop.
What's Not Listed That You Should Consider:
1. Gray Hat Python - In which you are taught to write debuggers, a skill which is a rite of passage for reverse engineering and much of blackbox security analysis.
2. The Art of Software Security Assessment - In which you are taught to find CVEs in rigorous depth. Supplement with resources from the 2010s era.
3. The IDA Pro Book - If you do any significant amount of reverse engineering, you will most likely use IDA Pro (although tools like Hopper are maturing fast). This is the book you'll want to pick up after getting your IDA Pro license.
4. Practical Malware Analysis - Probably the best single book on malware analysis outside of dedicated reverse engineering manuals. This one will take you about as far as any book reasonably can; beyond that you'll need to practice and read walkthroughs from e.g. The Project Zero team and HackerOne Internet Bug Bounty reports.
5. The Tangled Web - Written by Michal Zalewski, Director of Security at Google and author of afl-fuzz. This is the book to read alongside The Web Application Hacker's Handbook. Unlike many of the other books listed here it is a practical defensive book, and it's very actionable. Web developers who want to protect their applications without learning enough to become security consultants should start here.
6. The Mobile Application Hacker's Handbook - The book you'll read after The Web Application Hacker's Handbook to learn about the application security nuances of iOS and Android as opposed to web applications.
I think I've bought 50 books from Humble Bundle (spending about $1/book), but I've only cracked open a few of them.
Also thank you dsacco for the recommendations!
All in all I have to solve the captcha 5 times or so, each time involves marking multiple images.
What sense does this make?
Either they trust the captchas (then they only need one), or they don't (then they should remove them). I've complained about this to them in the past but they haven't changed it.
ProTip: entities like the FSF, the EFF, Wikimedia and many others can be helped via the humble bundle!!
Personally speaking, the only valuable books in this bundle are "Practical Reverse Engineering: x86, x64, ARM, Windows Kernel, Reversing Tools, and Obfuscation" and "Applied Cryptography: Protocols, Algorithms and Source Code in C, 20th Anniversary Edition"; the others are either quite outdated, too oversimplified, or script-kiddie-level stuff.
^As long as it's at least $15.
It bothers me that Humble Bundle has so heavily embraced this type of marketing.
What is dynamics to a geometer? Clearly it is important -- because it is important in physics. But I want to understand what mathematicians are getting excited about.
We can talk about a family of geometrical objects foo(t) where the parameter t is a single real value, which stands in for time. But -- other than its relation to the physical world -- why is that $t$ parametrisation important? Why not complex parameters, or multiple parameters, or something completely different?
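One standard way to make the question precise (a textbook formalization, not specific to any one answer here): a continuous-time dynamical system is usually defined as a flow, i.e. an action of the additive group of the reals on the space $M$:

```latex
\varphi : \mathbb{R} \times M \to M, \qquad
\varphi_0 = \mathrm{id}_M, \qquad
\varphi_{s+t} = \varphi_s \circ \varphi_t .
```

From that point of view, the real parameter $t$ is just the simplest choice of acting group, and the generalizations you mention all exist: replacing $(\mathbb{R},+)$ by $\mathbb{Z}$ gives iterated maps, by $\mathbb{C}$ gives holomorphic dynamics, and by a general group $G$ gives the theory of group actions, all of which are actively studied.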
So, other people dying of cancer not as good motivation?
On a side note, these only pick up aircraft that are broadcasting ADS-B, which covers most commercial aircraft but only a minority of commuter/personal planes. This will change at the beginning of 2020, when the FAA will mandate it for all aircraft (or at least any aircraft planning to ever operate in a Mode C ring / near a major city).
VRS displays much more information about flights and about your receiver performance, and is more customizable.
P.S.: I feel like I may have thrown this one out without enough context, so I'm just gonna bloviate a bit in this edit.
The "object" referred to in the OP link -- or what I like to vaguely call the "unit of consistency" -- is a single Starbucks employee. They hopefully have an internally-consistent history of what they believe to be true about the universe. (And stay sane during coffee-rush times.)
The phrase "eventual consistency" describes the relationship between multiple employees. Individuals can drastically disagree with one-another, but there's a framework for detecting disagreements and resolving the discrepancy in some way, even if that just means agreeing to ignore it and logging it for management to fix.
A lot of eventually-consistent systems involve allowing certain kinds of discrepancies to occur, while simultaneously promoting those errors into real business concepts in the domain. Banking and accounting systems are particularly great demonstrations of this, because they started doing it centuries ago when nodes were in cities connected by ink-and-parchment packets on horse-ridden routes.
You write a check and buy something. The banks reconcile and determine you don't have enough money, and your account goes negative.
If it were atomic, your account could never go negative.
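A minimal sketch of that pattern (my own toy model, with made-up branch names and a hypothetical overdraft fee): two nodes accept writes against their local view, and reconciliation promotes the resulting discrepancy into a domain concept instead of treating it as a failure.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One bank branch: a node with its own local view of an account."""
    name: str
    balance: int = 100
    log: list = field(default_factory=list)

    def withdraw(self, amount):
        # Each branch approves withdrawals against its LOCAL view only.
        self.balance -= amount
        self.log.append(-amount)

def reconcile(a, b):
    """Merge both logs into one settled balance. A discrepancy
    (overdraft) is not an error: it is promoted to a business concept."""
    settled = 100 + sum(a.log) + sum(b.log)
    overdraft_fee = 25 if settled < 0 else 0
    return settled, overdraft_fee

a, b = Branch("uptown"), Branch("downtown")
a.withdraw(80)   # approved: uptown sees balance 100
b.withdraw(80)   # approved: downtown also sees balance 100
settled, fee = reconcile(a, b)
print(settled, fee)  # -60 25: the "error" became an overdraft product
```

An atomic system would have rejected the second withdrawal; the eventually-consistent one accepts both and handles the conflict after the fact.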
One side of the fight (Core / Blockstream) wants to scale off-chain, pushing transactions to side-chains and/or lightning networks, and wants to profit from off-chain solutions.
The other side of the fight (segwit2x / miners) wants to scale on-chain, making the blocks bigger, and profit from block fees.
Both sides have pros and cons.
Pros of off-chain solutions - more scalable, don't need expensive confirmations for each transaction, more long-term. Cons: the solutions don't exist yet and might be vaporware; segwit etc are just stepping stones.
Pros of on-chain solutions - making the blocks larger can be done now, no need to wait for new software and new networks. Cons - makes the blocks larger, which makes running bitcoin nodes harder. Also cannot scale this way infinitely (you need to keep all the transactions on a disk forever).
The discussion about segwit is in reality just discussion about how to scale, and who profits.
As for me, I don't really care, Bitcoin is inefficient either way
There is a contingency plan in place should the Core-supported User Activated Soft Fork become activated.
Segwit2X has working code, has been tested in beta, and is now in RC.
Without commenting on the merits of the different approaches, the current situation is thrilling to watch as a spectator. To call it a "Civil War" is not an exaggeration.
Any press or "talks" that say otherwise are either being influenced with serious bias or are simply reporting false information.
I like DLT tech, however, if bitcoin has shown us anything it's that once you solve the double-spend problem you're still left with an even more grotesque problem of governance.
People poke fun at ETH since it has a "single leader", but Vitalik is more of a back-seat conductor than a "grand leader". Also, most arguments of "bitcoin being a truly decentralized platform because our devs are decentralized" can easily be defused by vaguely looking into how Blockstream operates...
The political shit-storm being paraded by BTC needs to end soon, we really don't need another 2-3 years of douchey BTC core devs arguing on the internet and bad-mouthing any project that isn't BTC.
Compare with how our usual currencies are handled. Behind closed doors with powerful banks or private companies deciding for our governments.
- They have twice as much money (yay!)
- They have twice as much money but the value is split, so it's worth approximately the same.
- One of the branches wins or mostly wins.
- The split does so much damage that some (all?) value of coins is lost.
Miners: Hashing power has little influence. As long as there are miners on each of the two mutually-rejecting chains, transactions will be processed on both. At first, transaction processing might take a while, but difficulty will adapt. This will create two legitimate currencies. Now everybody in possession of 1 BTC would have 1 BTCa + 1 BTCb.
Exchanges: Little power. They will trade both BTCa and BTCb, and accept commissions.
Trader of goods, in embedded devices: They might have to modify their client to accept both currencies, but they would have to follow the market rates. Otherwise they would have to suffer income loss from people using them to profit from arbitrating the markets.
BTC-rich individuals: They now have 1 BTCa + 1 BTCb. But there is transaction replayability. If they spend 1 BTCa, their BTCb can also get spent the same way, and they lose their BTCb. Each chain has a strategic incentive to replay transactions from the other one because: 1) it gets to keep the commission, 2) it asserts itself as more economically encompassing (not sure on this one; maybe they want to stay neutral).
BTC-holders can instead empty their wallet to two different addresses they control, one on each chain. Compared to the ones who got their transactions replayed, they have kept both their BTCa and their BTCb.
TL;DR: IMHO, come the technical fork, some BTC-holders will be tumbling until they irrevocably acquire their BTCa + BTCb, and use them to make runs on the markets, effectively materializing the economic fork.
I'd love the opinion of someone who lived through the ETH-ETC split, especially about the transaction replayability part.
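The replay mechanics can be sketched with a toy model (my own illustration: a hash stands in for a real ECDSA signature, and the chain names are hypothetical; the binding logic is the point). This is essentially what Ethereum fixed after the ETC split by adding a chain id to signatures in EIP-155:

```python
import hashlib

def sign(tx, chain_id=None):
    """Toy 'signature': a hash of the tx fields, optionally bound
    to one specific chain."""
    payload = repr(sorted(tx.items())) + repr(chain_id)
    return hashlib.sha256(payload.encode()).hexdigest()

class Chain:
    def __init__(self, chain_id, replay_protected):
        self.chain_id = chain_id
        self.replay_protected = replay_protected
        self.applied = []

    def accept(self, tx, sig):
        # A protected chain only accepts signatures bound to ITS id.
        expected = sign(tx, self.chain_id if self.replay_protected else None)
        if sig == expected:
            self.applied.append(tx)
            return True
        return False

tx = {"from": "alice", "to": "bob", "amount": 1}

# Fork with no replay protection: one signature is valid on both chains.
a, b = Chain("BTCa", False), Chain("BTCb", False)
sig = sign(tx)
a.accept(tx, sig)
b.accept(tx, sig)  # replayed: alice's BTCb moves too

# With chain-id binding, the same signature only works on one chain.
a2, b2 = Chain("BTCa", True), Chain("BTCb", True)
sig_a = sign(tx, "BTCa")
a2.accept(tx, sig_a)
b2.accept(tx, sig_a)  # rejected
```

Without the binding, anyone who sees your BTCa transaction can rebroadcast it verbatim on BTCb, which is exactly the risk described above.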
A while back there was a BTC marketplace where among other things, I spent 1 BTC on a steam key for the game Portal (a poor trade in hindsight).
But they shut down and the only other place that I can think of that accepts BTC is humblebundle.com - and presumably they convert it to USD right away.
> Bitcoin payments have been disabled for the Humble Capcom Rising Bundle.
So, yea, who accepts BTC right now?
It's kind of odd that there is still so much FUD about segwit, as it has already activated on LTC. It hasn't appeared to open any security holes.
Personally it seems like smart contracts and other similar services beget an ecosystem that could swell the market cap by a significant amount, I assume miners would have a long term goal of doing just that.
As a disclaimer, I own Bitcoin, but I'm definitely a layman and I don't really have a horse in the race. What I'm most concerned is what these changes are going to accomplish when looking back 10 years from now. I'm in BTC for the long-term, and this whole thing stinks of petty bias and tribal power plays.
Here is some background:
https://bitcoinmagazine.com/articles/bitcoin-unlimited-miner... or https://medium.com/@WhalePanda/verified-chatlogs-why-jihan-a...
how about some peer review yo, https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017... and https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017...
The real problem is simply that the blocksize is way too small. At peak daily loads we are trying to put 20MB of transactions into a single 1MB block. Of course the unprocessed transactions pool up in the 'mempool' waiting for the next block, and are eventually cleared later in the day in off-peak times.
The reason they don't just pool up indefinitely, and crash the server, is due to economics - people pay higher transaction fees to get their important transactions into the next block. Miners earn part of their income from those fees, so they put the best paying transactions into the block first.
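That fee-priority mechanism amounts to a greedy packing policy, roughly like this (a simplification with made-up transactions; real miners rank by satoshis per byte and handle dependent transactions too):

```python
def fill_block(mempool, max_block_bytes=1_000_000):
    """Greedy miner policy: sort pending txs by fee rate (fee per byte)
    and pack the best payers first until the 1 MB block is full."""
    block, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used + tx["size"] <= max_block_bytes:
            block.append(tx)
            used += tx["size"]
    return block

mempool = [
    {"id": "cheap",  "size": 250,     "fee": 250},     # 1 sat/byte
    {"id": "urgent", "size": 250,     "fee": 25_000},  # 100 sat/byte
    {"id": "huge",   "size": 999_900, "fee": 500_000}, # ~0.5 sat/byte
]
block = fill_block(mempool)
print([tx["id"] for tx in block])  # ['urgent', 'cheap']
```

The "huge" transaction waits for an off-peak block even though its absolute fee is the largest, which is why the bidding happens per byte, not per transaction.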
Most people who might like to use Bitcoin to pay for actual things will balk at paying $3 to send $500, which means fewer people use the system, or they only use it for important big trades - thus, an equilibrium is set up where transaction volume is kept low.
Keep in mind bitcoin blocks occur on average every 10 minutes. A global rate of 3 trans/sec is clearly not a large number for a system used by millions, all across the globe.
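The ~3 tx/sec figure falls straight out of the numbers above (back-of-envelope only; the 500-byte average transaction size is my assumption, and real sizes vary):

```python
# Back-of-envelope throughput from the block parameters above.
BLOCK_BYTES = 1_000_000    # 1 MB block size cap
BLOCK_INTERVAL_S = 600     # one block every ~10 minutes on average
AVG_TX_BYTES = 500         # assumed typical transaction size

txs_per_block = BLOCK_BYTES // AVG_TX_BYTES
txs_per_sec = txs_per_block / BLOCK_INTERVAL_S
growth_gb_per_year = BLOCK_BYTES * (365 * 24 * 3600 / BLOCK_INTERVAL_S) / 1e9

print(f"{txs_per_sec:.1f} tx/s, chain grows ~{growth_gb_per_year:.0f} GB/year")
```

The same arithmetic also shows the cost of the on-chain route: even at only 1 MB per block, the chain grows by roughly 50 GB a year, and that scales linearly with any block size increase.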
Litecoin has the same architecture, but doesn't have this bottleneck problem - it has 3/4 the blocksize, processes blocks 4x as frequently, and handles less than 1/10th the volume of transactions. So there is no mempool backlog, fees are low, etc.
The max blocksize is set to 1MB in code [ think #define or static const ], so increasing it means releasing new software - old versions will not be able to process large blocks, so this means a "hard fork".
I would argue that a blocksize increase is urgently needed and justifies a prudent hardfork - because it is currently preventing Bitcoin from growing. Not only do we need a 2MB block yesterday [ some say 8MB ], but we need a clear block size upgrade schedule for the next few years so Bitcoin can handle steady growth, without the need for many future hardforks.
Blocksize increase over the next few years could yield a 20x to 200x increase in throughput using the current architecture ... this releases the stranglehold on transaction flow and user growth, and buys time to build out all the other nice new technologies that can augment, or scale beyond, the linear architecture of the blockchain.
This issue has been delayed and debated for 2.5 years, so now it really is urgent and people on both sides are pretty angry. Sadly, it's metastasized into an ugly political civil war... but I think at heart it is a fairly normal engineering issue that could have been resolved routinely. Maybe having a ton of cash riding on your code makes easy choices hard.
But for Bloomberg to use a 'civil war' hyperbole signals fear from the establishment. Established capital more specifically. And really, that is bitcoin's biggest threat.
Disclosure: I own bitcoin.
In particular, here's their R version:
I feel like lots of folks in the R community secretly or not-so-secretly pine for lisp. My own "someday project" is to implement some portion of R in Racket"Arket". Of course all the native libraries that have been wrapped in R are the tricky bit.
If you want to open a lisp repl inside R you can just open an R session and write:
foo <- system("clisp", intern = TRUE)
That's all... use intern = TRUE if you want to save the session output to an R object.
For the mobile network:
- the coverage of 4G is ... scarce
> In Italy, almost all of the population can benefit from mobile Internet connectivity services over the 2G network, namely Global System for Mobile Communications (GSM), GPRS (General Packet Radio Service) and EDGE (Enhanced Data rates For GSM Evolution). Next to the full coverage is HSPA (High Speed Packet Access) technology, while the implementation of HSPA + (HSPA Evolution) and 4G LTE (Long Term Evolution) solutions is still to be completed. [https://www.sostariffe.it/news/copertura-rete-mobile-in-ital...]
- the practice of trying to force users into accepting bundles that include online streaming of music and/or movies, subscriptions to the fixed network, newspapers, movie tickets, various amenities is hated by many. [https://www.tim.it]
- the lack of an internet-only monthly plan is a real shortcoming, especially given the exaggerated costs in exchange for a low data cap
For the fixed network:
- having a guaranteed minimum bandwidth is wishful thinking, and the real upload bandwidth is around 20% of the declared rate
- the depeering has been dictated by anti-competitive strategies rather than technical reasons [http://blog.bofh.it/id_432]
In addition, it is implied that 5G will require usage of 28, 37, and 39 GHz, which LTE Advanced Pro does not currently have profiles for.
What San Marino is doing is building a current-gen 4G network (i.e., 4.9G or whatever they're calling it), allowing as many features of LTE Advanced as possible (including complex MIMO), so in a few years most cell phones will have caught up to make effective use of it.
Also, as a side note, LTE Advanced Pro was introduced in 3GPP release 13 (early 2016), and anything that meets the requirements for 5G will not be until release 15 (most likely next year).
With nearby Tesla cars and SolarCity roofs giving your phone 5G with 25ms latency. It could happen. Soon you will be able to buy a VPS in space to halve that.
One day this space-based network is going to come online and it will supersede many terrestrial upgrades, making things like the telco 5G rollout not needed after all.
I would not recommend anyone use this for anything other than entertainment.
Emotions are facts. This must be the premise. If you've felt them, they've happened already. The question is not "what are you feeling?" but rather, "what are you going to do about what you've felt?"
So as long as you're not confusing what you can change, you're good. You can prepare yourself and educate yourself for when you could feel something next time. But this is also already highly automated. Emotions tend to educate themselves. So I find a lot of this is just trusting yourself, and not overreaching into thinking you can change your past or twist the facts.
If you're sad you're sad. If you're happy you're happy.
It's not so much managing your emotions. It's more about managing the moment, and what you wish to do with yourself. It's about managing the things that make you feel things.
If you hate your job, quit your job. Don't manage the hate.
If you love someone, go for it. Don't manage your love.
And so on.
A good book on attending to feelings is Focusing, by Eugene Gendlin.
You really have to go quite far off the beaten path to reach a true dark-sky area. Unless you are living a 20 minute drive from the nearest town of 250 and an hour away from the nearest town of 100k+, you have some degree of light pollution.
A libertarian is someone who is hell bent on discovering exactly why and how societies choose to govern themselves, the hard way.
Anyone have links/references that name and describe this business style?
Getting yelled at constantly, usually with profane language? On call for weeks at a time with no help? Put in a double bind by management? Put on an impossible task, or a task made impossible? Stack ranked that you don't drink enough? Staying around late trying to look productive?
I wish I could say any of those weren't ubiquitous in SV.
I strongly believe that this kind of toxic culture has no place in any organization. So I'm in no way condoning the culture at Uber or similar SV companies. I for one, never want to work at a place like that. I am curious though, what makes us different?
But I make about 15k USD (if converting currencies) so there's that haha.
I find that astonishing. Outside Silicon Valley, I think that it would be quite unusual to leave $50 billion of plant running overnight, but pinch pennies by not paying anyone to stay up and check the oil levels during the dog watches.
Uber has over 15,000 employees? That seems a lot. That's almost the same headcount as Facebook. Why does Uber have so many people?