After all, most people on mobile spend their time inside apps, probably from some Google competitor like Facebook. Within these apps, they click on links, which increasingly load inside webviews; the framing app collects info on where people go, and uses this to sell targeted advertising. Facebook is a king in this space, and is now the second largest server of internet display ads, after Google.
Google's response to Facebook's encroachment is twofold: drive people to Google's own apps, like the Google Now Launcher (now the default launcher on Android) or the Google app present in older versions of Android and available for iOS, and deploy the same content-framing techniques from its own search results page on mobile user agents, where the competition is fiercest. Google can also position this as a legitimate UX improvement -- which, to its credit, is largely true: big-publisher content sites on mobile were usually usability nightmares and cesspits of ads.
I understand that the author and quite a few others are peeved by this behavior and by the fact that there's no way to turn it off. But it's really not in Google's best interest to even offer the option, because then many people would just turn it off, encouraged by articles like the one the author wrote last year, when he was caught off guard and had not yet gained a more nuanced appreciation for what's really going on.
The bottom line is this: Google is inseparable from its ad-serving and adtech business -- it is, after all, how they make most of their money -- so if you are bothered by their attempts to safeguard their income stream from competitors who have a much easier time curating their own walled gardens, you should stop using Google Search on mobile. There are alternatives, which may not be as thorough at search, but that's the tradeoff.
"I don't know why I do it, but for some reason it just doesn't feel right to me to consume the content through AMP. It feels slightly off, and I want the real deal even if it takes a few seconds extra to load."
I have subconsciously been doing the exact same thing for a while now, and I think this quote covers a good deal of public sentiment. It's weird to use AMP, yet slower without it.
Another main issue I have with AMP is that there is no speedy way to check the URL, something I do quite frequently. Instead it's just Google's hosting of the site, with the source only available by clicking the link icon.
Here's something simpler from a non-developer, average-consumer point of view. I recently began taking BART to work daily (new job). For those who don't know, BART is Bay Area's subway system, and (at least on the east bay side) cell reception is notoriously spotty.
When I'm on the train, which takes up 2 hours of my day every day (unfortunately), I'll be browsing, say, Facebook, and looking at links my friends post. Instant Articles almost always load successfully (and quickly), while external links to actual sites almost always fail to load or load insanely slowly.
Yes, when you're at home or in the city with good mobile reception, these things make no sense and you'd rather hit the original site directly. Give them their ad revenue, etc. to support them, right. But for the average consumers who actually have problems like slow internet (like the average joe who rides public transportation and wants to read on their phone), things like AMP and Instant Articles actually help. I can only imagine how much more significant a problem slow internet and slow mobile data are outside of Silicon Valley (where I live).
P.S. I don't work at Google or Facebook, and I know this sounds like propaganda, not to mention this is exactly what they would like to tell you as the "selling points" of these features, in order to continue building their walled garden empires. Fully aware of it, but I did want to bring up why they exist and why I even actually like them.
The speed difference on SERPs comes from the background downloading and (possibly) pre-rendering of AMP pages. This functionality could easily be added to browsers, keeping people on publishers' own websites without giving Google control over the content.
We already have <link rel="preload"> and <link rel="prefetch">, but how about adding <link rel="prerender" href="http://amp.newswebsite.com/article/etc." />?
This would absolutely give all of the benefits of AMP Cache without Google embracing and extending the web. It's also much simpler to integrate, every single site can choose to benefit from this (not just SERPs) and I don't end up accidentally sending AMP Cache urls to my friends on mobile.
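A rough sketch of how those hints could sit together in a page head (the amp.newswebsite.com URL is just the placeholder from the comment above; Chrome did ship a `prerender` hint at one point, though support has varied):

```html
<!-- already widely supported: fetch early, but don't render -->
<link rel="preload" href="/css/article.css" as="style">
<link rel="prefetch" href="/articles/next.html">

<!-- the proposal: let the browser fully pre-render the publisher's own page -->
<link rel="prerender" href="http://amp.newswebsite.com/article/etc.">
```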
At the risk of sounding like an old fart (I probably do), I fail to understand this frustration of normal mobile users with the so-called slowness of their mobile experience. To quote Louis C.K.: "Give it a second! It's going to space! Can you give it a second to get back from space?!"
There are extensions like NoScript that can give similar results in other browsers: https://noscript.net/
"google amp pages", "google amp annoying", "google amp sucks", "google amp conference"
"test", "cache", "disable", "maps"
This simple test is therefore inconclusive, but my hypothesis is that his search autocomplete hints are, ironically, colored by his search history. The only negative word I got (disabled) is much more neutral.
Now that I think about it, duckduckgo's "no tracking" isn't just valuable for privacy. It's also valuable for consistent search results across computers without yielding even more information (logging in etc). A few times I made a query and found something useful and surprising, and then I wasn't able to replicate the query on another computer to show someone else. In any case I'd hate to miss a rare interesting page because Google thought that extra 10 pages about Linux might interest me more.
Although there is much to be concerned about in Google's ever-expanding reach into the daily life of a good portion of the planet, I think web proponents have more to fear from the likes of FB, Apple, and others appearing on the horizon. These companies are mostly succeeding at meeting current UX expectations (performance, standardization, ease of use), and in doing so they are capturing eyeballs away from the web. It's possible some of those who have left for these walled gardens may not return.
The AMP saga has pretty clearly shown that users care about content while Web developers only care about URLs and what goes over the wire. This is a huge disconnect. It doesn't help that many Web developers show no empathy for the users' viewpoint.
Ultimately it probably is easier for Google to add an opt-out to appease a very small, very vocal minority than to educate them that the URL doesn't matter.
Marketing has taken the lead in corporate website projects to the detriment of end users; AMP puts the user at the center.
Currently with AMP, Google not only gets your traffic but also gets your content on its own domains (which makes all content look equally trustworthy), and at the same time it marks sites that have AMP available in its search results, thus weighting those results differently, because it can train users to click on them more.
Ultimately this is bad for everyone but Google.
However, if it were a framework / set of tools, we could create our own AMP pages and simply serve them from our own domains. Google's cache is really the only unique thing going on here, and we wouldn't have to worry about sharing trust.
Reminds me of this: http://blackhat.com/media/bh-usa-97/blackhat-eetimes.html
What did happen, though, is that I found Google results a lot worse on mobile, and ended up not searching for stuff on my mobile. Google results really look like a mess on mobile now...
They really went from minimalist zen to baroque Indian arabesque over the years...
At the agency I worked at, it was a huge problem, because back then clients and business people still used AOL and would see the jacked-up versions of their sites. There was literally nothing you could do; they did it to small and large sites with abandon.
AMP reminds me a bit of that type of setup with AOL re-compressing and crunching down sites through their network. I agree with Google on doing this for email for security but not necessarily websites. AMP to me is quite annoying and in general a bad move.
HN: But the open Internet!
Users: What's that?
HN: Normal websites!
Users: Like...the really slow ones? With all the annoying popovers? And pages that take forever to load? And for some reason cause my fancy new phone to slow to a crawl?
HN: Well, those websites should rewrite their entire codebase to be faster.
Users: That doesn't help me, though.
HN: Trust in the free market! The problem is you, the user, who just needs to exert more pressure on website purveyors so they'll make performant web sites.
Users: You mean, like, preferring websites that offer faster experiences? Okay. Continues to use AMP.
Do you only see them when doing a Google search?
If they solved the URL issue somehow (even by faking the address bar) and had both original and AMP links in search results, it would probably reduce the anti-AMP argument quite a bit. Both seem to be just UI issues.
As a developer I'm not a fan. It's another thing to manage and maintain. And the last time I checked, you can't leave without some serious consequences.
As a marketer I like the increased CTR but dislike the higher bounce rate and limited features.
I don't think I've ever seen an AMP-enabled website, I certainly never noticed any buttons suggesting I visit the original website.
But given the URL format, it should be trivial for a browser extension to rewrite links or requests from AMP pages to the original. I bet one already exists.
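The core of such a rewrite really is small. Here's a hedged sketch covering two common AMP URL shapes (the Google viewer path and the `cdn.ampproject.org` cache; the `s/` segment marks an HTTPS origin — this is an illustration, not an exhaustive parser):

```python
import re

def deamp(url):
    """Rewrite a Google AMP viewer / AMP cache URL back to the publisher URL.

    Handles two assumed shapes:
      https://www.google.com/amp/s/example.com/article
      https://example-com.cdn.ampproject.org/c/s/example.com/article
    Anything else is returned unchanged.
    """
    for pattern in (
        r"https?://www\.google\.com/amp/(s/)?(.+)",
        r"https?://[^/]+\.cdn\.ampproject\.org/[a-z]/(s/)?(.+)",
    ):
        m = re.match(pattern, url)
        if m:
            # the "s/" segment means the original origin was https
            scheme = "https" if m.group(1) else "http"
            return f"{scheme}://{m.group(2)}"
    return url  # not an AMP URL; leave it alone

print(deamp("https://www.google.com/amp/s/example.com/article"))
```

A real extension would just run this over clicked links or intercepted requests.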
The ticket was closed a few days ago. People dislike stuff like AMP, but we are probably stuck with it, there just isn't much interest in alternatives.
From Google News, the top hits are served through AMP, and I lose about 1/10 of my screen area to a pointless blue "bar" underneath Safari's address bar. This loss of screen space is the only reason I object to AMP.
The really scary part was after dialing the number and encountering the operator, we were unable to hang up (any time we hung up and picked back up, the operator was still there, even after waiting about two minutes). Fortunately this was (a) at MIT which still had a central electromechanical telephone switch for student phone lines in the '80s and (b) I had keys to the switch as a student phone repair tech.
I still remember grabbing my keys, running over to the switch, and physically pulling the relay contacts to release the call and prevent a trace to our location in case that was the motivation for holding the line (nowadays traces are digital and instantaneous, but when looking at old-school electromechanical switches you really did need time to trace the call physically through the relays).
Yes, we were aware the operator was probably just messing with us by showing he could hold our line against our will to discourage us from calling again, but it still scared the crap out of us just in case.
There is also another service, WPS (Wireless Priority Service), for cell phones, where you get priority just by prefixing the number you dial with *272; the only catch there is that your specific phone needs to be enrolled.
For example, this PDF explains more than anything on HN or Wikipedia: http://chicagofirstdocs.org/resources/060912-GETS.pdf
Here's a doc that covers all US Federal emergency communications: https://www.dhs.gov/sites/default/files/publications/nifog-v...
But yeah - it's all the luck of the draw. Some phone people have had varying levels of luck with other things involving that area code as well: http://www.binrev.com/forums/index.php?/topic/48478-weird-71...
I originally discovered this guy from HN and the audio recordings on that site are mesmerizing to me.
I suspect it got killed off because so many businesses were switching to cheapo, poorly-made, Winmodem-based PBXes that didn't recognize the area code.
If a number was not an active customer it was put in an outbound call list to solicit long distance.
The best story I remember was when the Navy wanted to know why we had called one of their nuclear submarines. This implied that the right 10 random digits could reach a sub.
808-248-0002 - "Your GETS call is being processed. Please hold."
I feel bad for that operator
> Note that Haskell type synonyms reverse the order of the types compared to C++.
This is true in the context of the article when compared with the presented typedef definitions, however the modern way to do type synonyms in C++ is via using declarations, which are very similar to the Haskell ones.
using Path = string; using PathSet = set<Path>;
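For comparison, the Haskell side reads in the same left-to-right "new name = existing type" direction as the `using` form (a small runnable sketch; the `Path`/`PathSet` names mirror the article's, the example values are mine):

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

-- Same direction as C++ `using Path = string;`
type Path    = String
type PathSet = Set Path

paths :: PathSet
paths = Set.fromList ["/nix/store/abc-hello", "/nix/store/def-world"]

main :: IO ()
main = print (Set.size paths)
```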
How do the two parsers compare w.r.t. error messages?
Haskell has Parsec/Megaparsec, which have better error messages but are extremely slow.
It doesn't really seem like grep to me. grep takes two inputs: a text and a search string. bingrep takes only one input, a binary. Without a search string, it's hard to say this is really like grep.
It seems similar to objdump but with somewhat differing information and with coloring.
I can see how it fills a gap. I'm not very often examining binaries, so I may be wrong, but am I wrong in assuming that objdump will simply list the parts it manages to interpret from a file and silently ignore gibberish or unsupported sections?
I have always wanted the ability to examine a binary file in a way that's a bit more interpreted than a hex editor, but without missing any "gibberish" parts.
I can see this tool as a nice addition to a binary forensics toolbox.
I wrote it because I couldn't implement Consistent Hashing with Bounded Loads on top of any pre-existing Go consistent-hashing package in a clean way.
Here's the ugly code that drove me to it: https://github.com/lafikl/liblb/blob/c9c4544834ac7ae7fa6a9cd...
This was striking to read; it's absolutely impossible for me to imagine a pre-telegraph world with such utterly slow communications where people nevertheless had friends and family separated by years of latency.
It makes me appreciate the significance of the electric telegraph and of long-range communication that is not limited by the speed of any physical vehicle. I recently read a short romance novel (called "Wired Love") published in 1880 about two telegraph operators who meet online (well, on the wire), use spare time on the wire to talk (and flirt) with each other, and eventually fall in love -- before even having met each other IRL or knowing each other's IRL names! There's even a quite modern impersonation that happens -- someone else steals the identifier of the operator the main character is in love with, and proceeds to be a rude asshole to her. Almost like IRC nick stealing, except a century or so earlier! The antics involving the telegraphs were, amusingly enough, the least antiquated part of the novel, as there's a shocking number of aspects of Internet communication and relationship practices that have pretty clear equivalents in that telegraph-era book. For example, the main character disdains the telephone and prefers the elegant and more technically involved telegraph, she and her partner "clasp hands" over the wire the way people do "/me hugs" on IRC, she gets called crazy for laughing to herself and "smiling at vacancy" while telegraphing with her online lover -- and after they've met IRL, her suitor even installs a private telegraph wire from her bedroom to his. There's something quite endearing about reading an old novel and realising that the people with access to real-time text chat more than a century ago might have used it in quite similar ways as people use it today.
This is such a beautiful ending to the article.
I think we are experiencing a whole new world right now, where anyone can be almost anywhere with the help of technology. How this will be discussed in the future, who knows.
Give yourself a day or two every month without computer or mobile phones, code editors, or programming discussions.
Enjoy the life, discover things, spend more time with family.
Edit: Can mods please remove the blog name from the title. It translates to "rose hip soup" and is rather out of context :)
I guess this is the API it uses. It's pretty impressive. Seemingly hundreds of bike sharing systems are on this thing. It uses OSM for the background, and gives you a reading on how many bikes are at a given station.
How they work:
1. Pay a deposit (99 RMB for Ofo, 299 RMB for Mobike) and register.
2. Scan the QR code to unlock the bike. Mobike bikes unlock automatically; Ofo will send a PIN to your phone that you can use to unlock the bike.
3. When you are done, just lock up your bike (rear wheel) and leave it anywhere.
In Shanghai it was common to see incensed security guards dragging bikes off premises. Bikes definitely do clutter up precious walking space.
It could work in about 440 cities, but none of the big companies leading this wanted it. What would you do with it?
> the world's 15 biggest public bike shares are ranked. Thirteen of them are in China.
Edit: oh, there is wikipedia article on TLA+.
Noticed a few typos: "there exits"; "adding more states so to it"; "many possible future"; "talking about expression". In note 14, the asterisks are being lost, with the text between them italicized. Note 15 is immediately followed by a comma splice; maybe you wanted "so" after the comma? Note 16 is also screwed up.
(Calling it a night. Will continue at some point. Would love to chat further; email in profile.)
I'd like to see a year-on-year graph of human lives lost in exchange for making livestock grow faster.
Combating multidrug-resistant Gram-negative bacteria with structurally nanoengineered antimicrobial peptide polymers: https://www.nature.com/articles/nmicrobiol2016162
An article for non-experts: http://www.telegraph.co.uk/health-fitness/body/does-this-25-...
"Lam successfully tested the polymer treatment on six different superbugs in the laboratory, and against one strain of bacteria in mice. Even after multiple generations of mutations, the superbugs have proven incapable of fighting back.
We found the polymers to be really good at wiping out bacterial infections, she says. They are actually effective in treating mice infected by antibiotic-resistant bacteria. At the same time, they are quite non-toxic to the healthy cells in the body."
"Professor Greg Qiao, her PhD supervisor, says that Lams project is one of the biggest scientific breakthroughs he had seen in his 20 years at Melbourne university."
Compared to other drugs, antibiotics are relatively easy and cheap to discover or "invent" with modern techniques. Getting them through clinical trials, on the other hand, is not cheap.
Many clinicians should be less loose with antibiotics, sure. But that won't eliminate resistance. Realistically, when superbugs become common, the incentives pharmaceutical companies face will shift. It's just that there will be a lot of morbidity and mortality while we're waiting for their drugs to make it to hospital pharmacies.
Less monoculture, smaller farms, no use of antibiotics for animals. It's the purest form of greed: they feed them antibiotics proactively.
What they seem to be talking around is implementing an app-level CALEA-like capability.
Here's how I think they think it would work: companies would be made to build lawful targeted-intercept capability into their apps, the same way telephony and other equipment is today. The app developer receives a warrant for an identifier, and they're required to split off that traffic and change the keys, or encrypt it twice (with the sender/recipient key and an intercept key, one per warrant; this happens with some internet and telephony warrants now).
We all know the downsides of this approach, but it isn't technically impossible. What would be impossible is enforcing it, as it is more a regulatory hurdle. It is more possible today because vertically integrated walled gardens are used for most app distribution -- backed by two of the largest companies in the world, who may be susceptible to a compromise (especially with the large tax issues hanging over both their heads).
On a scale of how bad things can get: I think warranted targeted surveillance is better than device backdoors, which are better than metadata retention, which is better than the mass surveillance we have today (leading to cable splitting and DPI, or situations like Lavabit).
I don't see, even if you're OK with warranted targeted surveillance, how a compromise is made here that doesn't lead to a whack-a-mole game where legitimate users are inconvenienced while the 'bad guys' are pushed onto alternate Android distributions and unofficial apps.
I also don't see how a CALEA-like capability is kept secure and safe - especially with apps (we saw the NSA use CALEA intercept to surveil political targets). Clapper et al always vaguely answer "key escrow" to this question without spelling out how that would work.
With subsequent backdowns in the scope of what these governments want to do (and this latest proposal is again a minor backdown), we might be reaching the final, conclusive point where comms do go dark, and the new reality is that despite all the tech we have, law enforcement mostly relies on human intelligence -- and they'll have to scale back up for that. There are 3,500 terror suspects in the UK and 4,000 employees at MI5, and notably, in the recent attacks there were HUMINT warnings.
Yes well, I don't. But hey why not facilitate foreign actors spying on our companies so that we may or may not catch any terrorists?
Forcing firms not to implement end-to-end encryption is forcing firms to implement flaws in their encryption software.
Giving governments the power to perform mass interception and decryption of communication doesn't seem like a sensible way to fight terrorists, even if they say it's only to be used on suspects. Terrorist attacks aren't increasing because the "bad guys" suddenly got their hands on a copy of OpenSSL.
In the case of the most recent attacks, these people were let into the country voluntarily.
The prime minister, Malcolm Turnbull, is a noted user of Signal...
One day these stories will be written by and about people who have a clue. One day...
I want to hear more on this, because so far as reporting has gone on terrorist attacks since 2013... The use of encrypted messaging systems seems conspicuously absent.
What the proposal seems to concentrate on is endpoints, where plaintext inevitably exists, and legal protocols for accessing it.
OTOH any sane implementation would only generate plaintext for display purposes, and would clear the RAM as soon as display (or input) is done, so finding the plaintext anywhere may be honestly impossible. At least, without tampering with the software on either end.
I happen to know he uses it quite extensively.
Back in its heyday 3 years ago, I did a ton of courses on Coursera. They weren't perfect, of course. There was no higher-level coordination that could lead to covering an entire 4-year degree's worth of material and it was hard to match up courses from different institutions with different prereqs. It was hard to find advanced courses in general and the enforced speed at which content was expected to be completed sucked.
But the automated graders were great. I went through parts of many, many courses before having to abandon them due to work pressure, and I finished a few, like the Scala course, the fantastic automata course, and some stuff from Berkeley before they bailed and moved to edX. It wasn't ideal for adult independent learners, but Coursera used to provide real value, especially for introducing niche topics that wouldn't be available via OCW.
It's a pity they never figured out a business model that would fit what its learners really wanted and just threw up a paywall instead.
Shake can be used like CMake (as a build-script generator) or like Ninja. For many projects (including building Ninja itself), it's faster than Ninja. Chromium is a notable exception: it has a huge set of Ninja files, and the Shake parser is not as fast as Ninja's parser.
From the docs:
> [...] compiling LLVM on Windows under mingw they both take the same time to compile initially, and Ninja takes 0.9s for a nothing to do build vs Shake at 0.8s. Shake is slower at parsing Ninja files, so if you have huge .ninja files (e.g. Chromium) Shake will probably be slower. Shake does less work if you don't specify deps, which is probably why it is faster on LLVM (but you should specify deps -- it makes both Shake and Ninja faster). As people report more results I am sure both Shake and Ninja will be optimised.
A Summer of Haskell project is underway to migrate GHC to a Shake-based build system.
PS. Wow, Annie Cherkaev's website looks really different now.
"... [in the future, work no longer exists for] most of humanity... That mass of people cannot work, but they can still kill people..."
"and how will they find some sense of meaning in life when they are basically meaningless, worthless?
My best guess at present is a combination of drugs and computer games as a solution for most"
On "one of the big problems with technology":
"It develops much faster than human society and human morality, and this creates a lot of tension. But, again, we can try and learn something from our previous experience with the Industrial Revolution of the 19th century, that actually, you saw very rapid changes in society, not as fast as the changes in technology, but still, amazingly fast.
The most obvious example is the collapse of the family and of the intimate community, and their replacement by the state and the market."
Talk about abstract metaphors that have no meaning.
The core of this argument seems to be:
1) Given a fixed state of a system, you can modify it by applying certain operators to the system.
2) You can model the 'causal structure' by observing changes as you randomly apply operators.
3) High level systems at a macro scale have a greater information density than the sum of their parts.
I.e., in a nutshell, you can have high-level (i.e. real-world) systems that display behaviour that is not just hard to predict from changes to low-level systems... but actually impossible to predict from them.
Which is to say, it's basically asserting that you cannot predict the behaviour of macro systems from microscale systems; e.g. you cannot predict the behaviour of a molecule from its quantum state / makeup (clearly false), and you cannot predict the behaviour of, say, a person deciding what to have for lunch from their quantum state.
...and not that you can't because it's hard.
You can't because it's not possible.
Am I misunderstanding?
That sounds completely crackpot to me.
The practical insight is that complex systems have some level of scale where causality experiments yield the most fruit, and that this effect is measurable.
The most interesting parts are the two justifications for why this may be true. (1) "the determinism can increase and the degeneracy can decrease at the higher scale (the causal relationships can be stronger)" (2) "Higher-scale relationships can have more information because they are performing error-correction."
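Both justifications are captured by the paper's "effective information" measure, which fits on a napkin: intervene uniformly over all states and compute the mutual information between cause and effect. The specific 4-state toy example below is my own construction for illustration, not one from the article:

```python
import numpy as np

def effective_information(tpm):
    """EI over a transition probability matrix: mutual information between
    cause and effect under uniform interventions, i.e.
    H(average effect distribution) - average H(each row)."""
    tpm = np.asarray(tpm, dtype=float)
    effect = tpm.mean(axis=0)  # effect distribution when every state is do()'d equally

    def H(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    return H(effect) - float(np.mean([H(row) for row in tpm]))

# Micro scale: states A, B, C scramble uniformly among themselves; D is fixed.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro scale: group {A, B, C} into one state; the dynamics become deterministic.
macro = [[1, 0],
         [0, 1]]

print(round(effective_information(micro), 3))  # 0.811 bits
print(round(effective_information(macro), 3))  # 1.0 bits
```

The coarse-grained description is more deterministic and less degenerate, so it carries more causal information than the microscale it was built from — which is the claim in miniature.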
I've skimmed the linked article, and will in all likelihood go back to it, but I wonder about some stuff from the conclusion:
> It also provides some insight about the structure of science itself, and why it's hierarchical (biology above chemistry, chemistry above physics). This might be because scientists naturally gravitate to where the information about causal structure is greatest, which is where they are rewarded in terms of information for their experiments the most, and this won't always be the ultimate microscale.
I don't see how more information existing at higher levels would explain the hierarchical structure of the sciences: saying the reason is that there's more information at the higher levels would imply that e.g. we found biology to be more valuable than physics, whereas the actual situation seems to be that we value these levels equally. Maybe that's just a phrasing issue. In any case, it seems simpler that we organize the sciences hierarchically because the human brain organizes information that way.
I also don't see how there being more information at certain levels is necessarily useful: isn't the quality of the information as important as or more important than the quantity? But I guess if it's specifically 'causal' information, there's an implication (at least for the sciences) of ideal quality...
Me, me! I know a kind of postselection effect that can be explained on a napkin (though nobody knows if it's actually true). As a bonus, it can affect not just the Born probabilities, but the probabilities of anything you choose, even things that already happened. Here's how it works.
The idea is a variation on anthropic reasoning, originally due to Bostrom (http://www.anthropic-principle.com/preprints/cau/paradoxes.h...) If there's a completely fair quantum coin, and many people over many generations decide to have kids iff the coin comes up heads, then the coin might appear biased to us for anthropic reasons (more people in the heads-world than in the tails-world). You can influence all sorts of things this way, like Bostrom's example of Adam and Eve deciding to have kids iff a wounded deer passes by their cave to provide them with food. (That's if anthropic probabilities work in a certain intuitive way. If they work in the other intuitive way, you get other troubling paradoxes in the same vein. All imaginable options lead to weirdness AFAIK.)
A few years back I spent a long time on such problems, and came up with a simple experiment about "spooky mental powers" that doesn't even involve creating new observers. It's completely non-anthropic and could be reproduced in a lab now, but the person inside the experiment will be deeply troubled. Here's how it goes:
You're part of a ten-day experiment. Every night you get an injection that makes you forget what day it is. Every day you must pick between two envelopes, a red one and a blue one. One envelope contains $1000, the other contains nothing. At the end of the experiment, you go home with all the money you've made over the ten days. The kicker is how the envelopes get filled. On the first day, the experimenters flip a coin to choose whether the red or the blue one will contain $1000. On every subsequent day, they put the money in the envelope that you didn't pick on the first day.
So here's the troubling thing. Imagine you're the kind of person who always picks the red envelope on principle. Just by having that preference, while you're inside the experiment, you're forcing the red envelope in front of you to be empty with high probability! Since your mental states over different days are indistinguishable to you, you can choose any randomized strategy of picking the envelope, and see the result of that strategy as if it already happened. In effect, you're sitting in a room with two envelopes, whose contents right now depend not just on what you'll choose right now, but on what randomized strategy you'll use to choose right now. If that's not freaky, I don't know what is.
Going back to Aaronson's original point, the world as it looks to us might easily contain postselection and other weird things. Reducing everything to microstates is a valid way to look at the universe, but you aren't a microstate. You are an observer, a big complicated pattern that exists in many copies throughout the microstate, and the decisions of some copies might affect the probabilities observed by other copies at other times. The effects of such weirdness are small in practice, but unavoidable if you want a correct probabilistic theory of everything you observe (or a theory of decision-making for programs that can be copied, which is how I arrived at the problem).
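The red-envelope effect is easy to check by simulation (a sketch; the strategy names and trial counts are mine):

```python
import random

random.seed(0)

def red_money_fraction(strategy, days=10, trials=100_000):
    """Fraction of person-days on which the red envelope holds the $1000."""
    hits = 0
    for _ in range(trials):
        red_first = random.random() < 0.5          # day-1 coin flip fills red or blue
        picks = [strategy() for _ in range(days)]  # your (possibly randomized) choices
        for day in range(days):
            if day == 0:
                red_today = red_first
            else:
                # from day 2 on, money goes in whichever envelope
                # you did NOT pick on day 1
                red_today = picks[0] != "red"
            if red_today:
                hits += 1
    return hits / (trials * days)

always_red = lambda: "red"
fair_coin = lambda: random.choice(["red", "blue"])

print(round(red_money_fraction(always_red), 2))  # ~0.05: your preference empties red
print(round(red_money_fraction(fair_coin), 2))   # ~0.5: a coin-flipper sees no bias
```

The always-red picker sees a red envelope that is empty 95% of the time, while a coin-flipper sees a fair one — the contents in front of you really do depend on the strategy you use to choose, exactly as described above.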
I'd love to see some good tutorials, or websites that could help me get a handle on that side of things so that I can start to create useful tooling using it.
Django is a web framework, why does it matter so much in this project?
If you didn't know the specific piece, you could still get a point by guessing the nationality of the composer and the time it was written within 10 years. The available time period for music was anything from Pythagoras' time to the present day (would have been late 80s at the time we were playing.)
It had to be reasonably describable as "classical" music. Not rock, jazz, blues, or anything like that.
It was a hard fucking game for an 8-year-old, and we really worked each other over very hard.
I wonder if I could fork this and try to get it to tell the difference between, for example, Bach and Corelli, Mahler and Bruckner, or Zarlino and Palestrina.
Very cool project.
There could be some interesting ideas in there too, but that is not the main meat of literature.
I think it's important for the sake of intellectual rigour not to confuse the two.
Really struck me in the feels. Rest In Peace.
..."i apologize for the length of this letter; if i'd had more time, it would have been shorter"
I feel like he had the inklings of some good ideas in there, but none of them fleshed out. As all ideas about life are when you're 20 :)
The people to look up to, are those you relate to, who are past 50. Who've raised children, who know how to communicate to younger people AND have a wealth of life experience and hopefully wisdom.
No 20 year old has ever given me solid life advice - you just haven't lived long enough.
I don't think you can learn from positive examples exclusively. To solve a nonogram, you need to mark squares that are black and those that ARE NOT black for sure. Another example: survivorship bias. In WW2 the British were sending bombers to Berlin and other German cities. Engineers were tasked with putting more armor plating on bombers. They examined where round (bullet) holes clustered on the returning bombers, and added extra armor in the biggest clusters. The fewest holes were found around the fuel tank and pilot's cabin, and those got no extra protection. It was a perfectly rational decision based on the available data, but the data only covered survivors: the bombers hit in those spots never made it back. They could have learned a lot from the losers.
Being wise, or intelligent, is not following some great personas. It's forming insight based on your own observations. Hunter S. Thompson's advice may be sound, but it would be equally sound if he were a garbage collector. That you must get such advice from him suggests, sadly, that you can't recognize it when you see it. (I'm not saying I'm better.)
If there's one lawyer in town, they drive a Chevrolet. If there are two lawyers in town, they both drive Cadillacs.
Basically, there are two approaches the plaintiff might take here. The simplest is to cite the doctrine of equivalents. This is basically the notion that if you do the same thing in the same way for the same purpose, then it's the same process, even though you are using digital instructions instead of logic gates. The legal theory here is pretty well settled. The problem is that you'd need to justify that digital instructions are obviously equivalent to logic gates, and a skilled professional would have equated them at the time of the patent's filing.
The other approach is to argue that an emulator actually is a processor, and therefore fits the literal claims of the patent. The explanation for this is pretty well-established: it's literally the Church-Turing Thesis. However, the viability of this argument depends on the language of the patent claims. Also, it's hard enough to explain the C-T Thesis to CS students. My undergrad had an entire 1-credit-equivalent course that basically just covered this and the decidability problem. Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
Now, Intel certainly has enough resources to do both of these things (and they may also have precedent to cite, that didn't exist back then or that wasn't relevant to that case). Don't take this as an opinion on any possible result, it's just information such as I remember it.
- https://en.wikipedia.org/wiki/Doctrine_of_equivalents
- https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
They no doubt have been filing additional patents over the years. But I'm sure MS and Qualcomm have plenty of their own patents to bargain with.
Also their warning could backfire if it gives Microsoft one more reason to finally walk away from x86 compatibility... not that this is likely to happen anytime soon.
> AMD made SSE2 a mandatory part of its 64-bit AMD64 extension, which means that virtually every chip that's been sold over the last decade or more will include SSE2 support. [...] That's a problem, because the SSE family is also new enough (the various SSE extensions were introduced between 1999 and 2007) that any patents covering it will still be in force.
AMD64 requires SSE2 which was introduced in 2001, right? So isn't it just 1 year until Microsoft can put in what's required for the AMD64 architecture?
Scorched earth policy will likely not be defensible under fair use law. Reverse engineering for compatibility has a few precedents.
I mean, Apple and Samsung had a billion dollar lawsuit while Samsung chips were still in iPhones. It's certainly precedented to sue a corporation you're actively doing business with.
I think this theory of infringement has to run into various thought-experiment problems, such as: can I auto-translate that binary into some other instruction set, then execute the translated binary, without infringing Intel patents? (Yes, surely.) Is the translator now infringing Intel patents because it has to understand their ISA? (No, surely.)
Now, can I incorporate that translator into my OS such that it can now execute i386 binaries by translating them to my new instruction set which I can execute either directly or by emulation? If so then I am now not infringing. Or did infringement suddenly manifest because I combined two non-infringing things (translator + emulator for my own translated ISA)?
"if WinARM can run Wintel software but still offer lower prices, better battery life, lower weight, or similar, Intel's dominance of the laptop space is no longer assured."
Peter. My man. I laughed. I cried.
For the millionth time, the ARM ISA does not magically confer any sort of performance or efficiency advantage, at least not that matters in the billion+ transistor SoC regime. (I will include some relevant links to ancient articles of mine about magical ARM performance elves later.) ARM processors are more power efficient because they do less work per unit time. Once they're as performant as x86, they'll be operating in roughly the same power envelope. (Spare the Geekbench scores... I can't even. I have ancient published rants about that, too).
Anyway, given that all of this is the case, it is preposterous to imagine that an ARM processor that's running emulated(!!!) x86 code will be at anything but a serious performance/watt disadvantage over a comparable x86 part.
This brings me to another point: Transmeta didn't die because of patents. Transmeta died because "let's run x86 in emulation" is not a long-term business plan, for anybody. It sucks. I have ancient published rants on this topic, too, but the nutshell is that when you run code in emulation, you have to take up a bunch of cache space and bus bandwidth with the translated code, and those two things are extremely important for performance. You just can't be translating code and then stashing it in valuable close-to-the-decoder memory and/or shuffling it around the memory hierarchy without taking a major hit.
So to recap, x86 emulation on ARM is not a threat to Intel's performance/watt proposition -- not even a little teensy bit in any universe where the present laws of physics apply. To think otherwise is to believe untrue and magical things about ISAs.
HOWEVER, x86-on-ARM via emulation could still be a threat to Intel in a world where, despite its disadvantages, it's still Good Enough to be worth doing for systems integrators who would love to stop propping up Intel's fat fat fat margins and jump over to the much cheaper (i.e. non-monopoly) ARM world. Microsoft, Apple, and pretty much anybody who's sick of paying Intel's markup on CPUs (by which I mean, they'd rather charge the same price and pocket that money themselves) would like to be able to say sayonara to x86.
The ARM smart device world looks mighty good, because there are a bunch of places where you can buy ARM parts, and prices (and ARM vendor margins) are low. It's paradise compared to x86 land, from a unit cost perspective.
Finally, I'll end on a political note. It has been an eternity since there was a real anti-trust action taken against a major industry. Look at the amount of consolidation across various industries that has gone totally uncontested in the past 20 years. In our present political environment, an anti-trust action over x86 lock-in just isn't a realistic possibility, no matter how egregious the situation gets.
So Intel is very much in a position to fight as dirty as they need to in order to prevent systems integrators from moving to ARM and using emulation as a bridge. I read this blog post of theirs in that light -- they're putting everyone on notice that the old days of antitrust fears are long gone (for airlines, pharma, telecom... everybody, really), so they're going to move to protect their business accordingly.
Edit: forgot the links. In previous comments on exactly this issue I've included multiple, but here's a good one and I'll leave it at that: https://arstechnica.com/business/2011/02/nvidia-30-and-the-r...
And unless Qualcomm and Microsoft are working on hardware-assisted x86 emulation, this warning shot may be directed at somebody else.
My guess: Apple.
QEMU emulates x86 chips, as do other emulators. I wonder how those are affected?
It's quite possible I'm missing something vital here, of course.
Okay, got it. I'll make sure to account for that in my next CPU/device purchase.
AMD licenses x86 patents to Qualcomm/MS to make the x86 emulator better protected against patent claims. In return, Qualcomm and AMD team up on better ARM-based server processors. MS gets to sell more Windows/Windows Server (sad).
I would love to see Dell, Lenovo, and HP switch exclusively to Ryzen processors, and switch to the new Naples CPU in all their server/storage systems.
The raven preferred the trainer that gave something. Not really cheating or holding a grudge.
The National Geographic domain gave it away.
Sorry, couldn't resist the corniness.
2. Feed the AEAD into Poly1305 in the same way.

IIRC libsodium hashes everything in this order:

- AEAD, padded with zeros so its length is a multiple of 16
- message, padded with zeros in the same way
- AEAD length
- message length
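In code, that layout (as specified for ChaCha20-Poly1305 in RFC 8439; a sketch of the byte layout, not libsodium's actual internals) looks like:

```python
import struct

def poly1305_input(aad: bytes, ciphertext: bytes) -> bytes:
    """Assemble the byte string that gets fed to the Poly1305 MAC."""
    def pad16(b: bytes) -> bytes:
        # zero-pad to the next multiple of 16 bytes (no-op if already aligned)
        return b + b"\x00" * ((16 - len(b) % 16) % 16)
    return (pad16(aad)
            + pad16(ciphertext)
            + struct.pack("<Q", len(aad))          # AEAD length, 64-bit little-endian
            + struct.pack("<Q", len(ciphertext)))  # message length, 64-bit little-endian
```

Hashing the lengths at the end is what prevents ambiguity between where the padded AEAD stops and the padded message starts.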
I sometimes wonder: if Steve Jobs were still alive, would he be revered as much as he was in the 2010s? His management style would sooner or later have caused permanent damage to someone you and I personally know. And that's where we'd have drawn the line.
The default audio player for mp3 files is not set up (like VLC or something).
When you try to run it, note that Stephanie depends on the Python function os.startfile, which apparently is a Windows-only thing that works like xdg-open. I'm not a Python expert, but here's what I did: the offending file is stephanie-va-master/Stephanie/TextManager/speaker.py. If you edit line 15, which is:

    os.startfile(self.speak_result)

and change it to:

    os.system("xdg-open " + self.speak_result)

everything works. Somebody else more knowledgeable can probably suggest something better.
Edit: Fixed! They fixed it by wrapping that line in an if statement checking the output of sys.platform. Very responsive dev.
I'm not one to be a prude... but, is this really necessary?
Hope this project keeps growing!
How does Apple not expect that annoying developers with their App Store process (so much so that things like https://fastlane.tools/ exist), AND charging them 30%, AND apparently not actually reviewing anything about the apps making it into their store, will eventually drive people away from it?
(Why yes, I am cranky over the amount of hoops I had to jump through to get to the point of asking apple for permission to put my beta on my co-founder's iPhone)
#2 - Average computer/phone users are willfully ignorant. I would say stupid, but that's a judgement call (even though I think it's true). Someone with knowledge can advise them, but they cannot be bothered with all that fuss. They'd rather ignore sound advice and push buttons. After all, look at who runs the country and the complacency of many of its people.
Have you ever had a friend who was a lawyer? Did you ever get some traffic ticket and think, "Hey, I'll ask Bob if he can help me handle this!"? I'm guilty of this once in a while. But "average users" are guilty of doing this to technical people all the fucking time. And when we advise them of behaviors to change to avoid future incidents, they nod and agree, but then repeat the stupid behavior later.
Sorry for the rant, but perhaps it's time to just start replying to scammed/screwed users with, "Oh wow, that's really unfortunate. I guess you'll have to go buy a new phone/computer." Maybe that will jar them into actually using their brains.
* Edit for wine-related typos.
Also, do people still use the App Store? I don't think I have casually browsed for apps in 5 years or more.
How long will apple allow this? At the very least it should be impossible to bid on trademarked terms, and no ad should ever outrank an exact match result.
I also had another app that was accepted into the app store then when I pushed an update release I was informed that my logo had to change because it used Apple's camera emoji. I only did this because another popular app did the same thing (down for lunch). In order to stay compliant, I had to change my logo.
I'm fine with said rules existing as in theory they are meant to protect lay customers from junk like this. How on earth did this thing make it through a review process that's so hard on some apps?
I wish Apple would apply its rules and vetting with more consistency.
I've never done it, either. I clearly remember the only few times I clicked on AdSense ads - once by mistake, and was extremely annoyed at the results (it was a sort of list like search results), and 2-3 times to test my own AdSense ads (yeah, against ToS).
Yet AdSense is raking in billions. I've always wondered who actually clicks on the ads :D
How did this app get through that?
Never, I guess.
Nice trip down the rabbit hole, though; should see how bad it gets with VMs.
 Or so I have heard ... from a friend
I get why people do it, but it's sad that they do.
As a long-time Android user (and no, I wasn't happy for the most part; I wanted to taste the iOS waters both as a user and as a mobile dev) who recently moved to an iPhone SE, I feel really disappointed.
Like the overnight train that left me in an empty field some distance from the settlement, the process of economic development has for the most part bypassed the two hundred or so families that make up the village of Palanpur. They have remained poor, even by Indian standards: less than a third of the adults are literate, and most have endured the loss of a child to malnutrition or to illnesses that are long forgotten in other parts of the world. But for the occasional wristwatch, bicycle, or irrigation pump, Palanpur appears to be a timeless backwater, untouched by India's cutting-edge software industry and booming agricultural regions. Seeking to understand why, I approached a sharecropper and his three daughters weeding a small plot. The conversation eventually turned to the fact that Palanpur farmers sow their winter crops several weeks after the date at which yields would be maximized. The farmers do not doubt that earlier planting would give them larger harvests, but "no one," the farmer explained, "is willing to be the first to plant, as the seeds on any lone plot would be quickly eaten by birds." I asked if a large group of farmers, perhaps relatives, had ever agreed to sow earlier, all planting on the same day to minimize losses. "If we knew how to do that," he said, looking up from his hoe at me, "we would not be poor."
1. how they didn't have pest problems if they planted in fractal patterns
2. but they did have pest problems if they didn't plant at the same time
Could someone kindly explain that in a little more depth?
+ Normalization is beneficial for learning (per-unit zero mean and unit variance). It can be batch normalization, layer normalization, or weight normalization (if trained layer by layer and the previous layer is normalized).
+ Perturbations through stochastic gradient descent and stochastic regularization (dropout) do not destroy the normalized properties for CNNs, but they do for feed-forward nets.
+ Self-normalizing net uses a mapping g: O -> O that maps mean and variance to the next layer for each observation. Iteratively applying this mapping leads to a fixed point.
+ The activation function that achieves this is not a sigmoid, ReLU, etc., but a function that is linear for positive x and exponential in x for negative x: the scaled exponential linear unit (SELU).
+ Intuitively: for negative net inputs the variance is decreased, for positive net inputs the variance is increased.
+ For very negative values the variance decrease is stronger. For inputs close to zero the variance increase is stronger.
+ For large variance in one layer, the variance gets decreased more in the next layer, and vice versa.
+ Theorem 2 states that the variance can be bounded from above, and hence there are no exploding gradients.
+ Theorem 3 states that the variance can be bounded from below and does not vanish.
+ Stochasticity is introduced by a variant on dropout called alpha dropout. This is a type of dropout that leaves mean and variance invariant.
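As a sketch, the SELU activation itself is tiny; the two constants below are the fixed values derived in the paper for the zero-mean, unit-variance fixed point:

```python
import numpy as np

# fixed-point constants from Klambauer et al., "Self-Normalizing
# Neural Networks" (2017)
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # linear for positive inputs, scaled exponential for negative inputs
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```

Note that lambda is slightly greater than 1, which is what lets the function increase variance for inputs near zero, while the saturating exponential tail shrinks variance for very negative inputs.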
I think the paper gives a nice view on handling gradients in deep nets.
Looks impressive, best or near best on most, but I wish they had bolded best of set.
Still not sure how the regularization squares with the rapid precision fitting to the training set data in Figure 1.
Next time someone claims people don't have a theoretical understanding of how NNs work, point them at that.
Fall recovery is great until you realize you can push it away by going "Boo!"
Out of the applications, dancing looks most convincing to me. It could serve as a toy / decoration on dance floors and shop displays.
That's because it doesn't factorize the input into separate meaningful parts. The next step in LSTMs will be to operate over relational graphs so they only have to learn function and not structure at the same time. That way they will be able to generalize more between different situations and be much more useful.
Graphs can be represented as adjacency matrices and data as vectors. By multiplying vector with matrix, you can do graph computation. Recurring graph computations are a lot like LSTMs. That's why I think LSTMs are going to become more invariant to permutation and object composition in the future, by using graph data representation instead of flat euclidean vectors, and typed data instead of untyped data. So they are going to become strongly typed, graph RNNs. With such toys we can do visual and text based reasoning, and physical simulation.
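As a toy sketch of that idea (a hypothetical 3-node path graph, not any particular library's API):

```python
import numpy as np

# adjacency matrix of a 3-node path graph: 0 - 1 - 2
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

h = np.array([1.0, 2.0, 3.0])  # one scalar feature per node

# one step of graph computation as a matrix-vector product:
# each node's new value is the sum of its neighbors' features
h_next = A @ h  # -> [2.0, 4.0, 2.0]
```

Iterating that product is the "recurring graph computation" above; a graph RNN would interleave such steps with learned, typed transformations of the node features.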
Instead of handwaving about "forgetting", it is IMO better to understand the problem of vanishing gradients and how forget gates actually help with it.
And Jürgen Schmidhuber, the inventor of the LSTM, is a co-author of the RHN paper.
It's well understood that CFGs cannot be induced from examples. That accounts for the fact that LSTMs cannot learn "counting" in this manner, nor indeed can any other learning method that learns from examples.
 "Strings generated from"
The same goes for any formal grammars other than finite ones (as in, simpler than regular).
Is anyone working with LSTMs in a production setting? Any tips on what are the biggest challenges?
Jeremy Howard said in the fast.ai course that in applied settings, simpler GRUs work much better and have replaced LSTMs. Any comments about this?