I am currently a bit blown away by the reaction to this. I finished this over a year ago but didn't manage to make it public till now. I presented this at a meetup in Berlin yesterday and someone asked for an online version, which I posted to Twitter, and then things escalated quickly.
So because a lot of people are asking: I am currently redoing my completely outdated homepage. The new one will include an in-depth explanation of what is actually happening there, what the idea behind it is, and how I want to apply this to actual web design to make resolution-independent work easier and less restrictive for both designers and developers.
Until then I will try to answer some questions here.
The author then has a renderer to turn these into pixel data. It seems to render them down to an actual pixel image on the fly.
Given the sheer amount of work for one piece, with The Three Graeae appearing to be around 1000 lines of code, I'm also quite amazed he managed to produce seven. Brilliant, and Art indeed.
My faves are 1- Zeus, and 2- Teiresias
Those two look like they took the most work, and are most technically challenging. The Man-Eagle-Bull-Snake morph especially.
Halfway random tangent, and halfway related, but I'm really super looking forward to wider adoption of SVG 1.2 precisely because it adds absolute unit constraints in addition to the relative constraints, so you can do some of the same kind of stuff in an SVG image, and have authoring tools to support it. Not the pixel art side of it, and nowhere near as crazy as this project, but it will still be really useful.
I really like that in "Teiresias", if you make the canvas narrow enough, the man jumping (presumably Teiresias) changes position and gets a white beard instead of white long hair. Just to still give the impression that it's an old sage, in small vertical space.
Do you mean you're tracking visits that hit "view page source"? Does that work? I can't find any info about that on Google.
I'd love to hear about the inspiration behind the project.
Probably world changing, when considering that even semi-technical folks can cook up tools to dig into things like this.
I know this tool was by a developer, but scrapinghub has a web UI to make scrapers.
Lobbyists have to follow registration procedures, and their official interactions and contributions are posted to an official database that can be downloaded as bulk XML:
Could they lie? Sure, but in the basic analysis that I've done, they generally don't feel the need to...or rather, things that I would have thought lobbyists/causes would hide, they don't. Perhaps the consequences of getting caught (e.g. in an investigation that discovers a coverup) far outweigh the annoyance of filing the proper paperwork...having it recorded in an XML database that few people take the time to parse is probably enough obscurity for most situations.
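If you're curious what "parsing it" looks like, here's a rough sketch with Python's stdlib. The filename and the element/attribute names are hypothetical stand-ins; check the actual schema of the bulk XML before relying on them.

    import xml.etree.ElementTree as ET

    # Hypothetical filename and element/attribute names, for illustration only.
    tree = ET.parse("lobbying_disclosures.xml")
    for filing in tree.getroot().iter("Filing"):
        client = filing.find("Client")
        if client is not None:
            print(filing.get("ID"), client.get("ClientName"), filing.get("Amount"))

The bulk files are large, so for a real pass you'd want ET.iterparse() to stream the file rather than load it all into memory.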
There's also the White House visitor database, which does have some outright omissions, but still contains valuable information if you know how to filter the columns:
But it's also a case (as it is with most data) where having some political knowledge is almost as important as being good at data-wrangling. For example, it's trivial to discover that Rahm Emanuel had few visitors despite his key role, so you'd have to be able to notice that and then take the extra step to find out his workaround:
And then there are the many bespoke systems and logs you can find if you do a little research. The FDA, for example, has a calendar of FDA officials' contacts with outside people...again, it might not contain everything but it's difficult enough to parse that being able to mine it (and having some domain knowledge) will still yield interesting insights: http://www.fda.gov/NewsEvents/MeetingsConferencesWorkshops/P...
There's also OIRA, which I haven't ever looked at but seems to have the same potential of finding underreported links if you have the patience to parse and text mine it: https://www.whitehouse.gov/omb/oira_0910_meetings/
And of course, there's just the good ol FEC contributions database, which at least shows you individuals (and who they work for): https://github.com/datahoarder/fec_individual_donors
This is not to undermine what's described in the OP...but just to show how lucky you are if you're in the U.S. when it comes to dealing with official records. They don't contain everything perhaps but there's definitely enough (nevermind what you can obtain through FOIA by being the first person to ask for things) out there to explore influence and politics without as many technical hurdles.
For developers and managers out there, do you prefer to build your own in-house scrapers or use Scrapy or tools like Mozenda instead? What about import.io and kimono?
I'm asking because a lot of developers seem to be adamantly against using web scraping tools they didn't develop themselves, which seems counterproductive because you're taking on technical debt for an already solved problem (a basic scraper is only a few lines; see the sketch below).
So developers, what is the perfect web scraping tool you envision?
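To make the "already solved problem" point concrete, here's roughly the entire core of a basic scraper in Python with requests and BeautifulSoup (the URL and CSS selector are placeholders):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/listings", timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    for link in soup.select("div.listing h2 a"):  # hypothetical selector
        print(link.get_text(strip=True), link["href"])

Everything hard about scraping (sessions, rate limiting, retries, JS-rendered pages) is what the tools and frameworks actually compete on.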
And it's always a fine balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped.
It seems like web scraping is a really shitty business to be in and nobody really wants to pay for it.
We're happy to unban accounts when people give us reason to believe they will post only civil and substantive comments in the future. You're welcome to email email@example.com if that's the case.
I also learned at NSA, we could watch drone videos from our desktops. As I saw that, that really hardened me to action. - In real time? - In real time. Yeah, you... it'll stream a lower quality of the video to your desktop. Typically you'd be watching surveillance drones as opposed to actually, like, you know, murder drones where they're going out there and bomb somebody. But you'll have a drone that's just following somebody's house for hours and hours. And you won't know who it is, because you don't have the context for that. But it's just a page, where it's lists and lists of drone feeds in all these different countries, under all these different code names, and you can just click on which one you want to see.
He doesn't say explicitly that this includes the U.S., but I made that assumption and here it is: proven.
Military resources, such as the National Guard, get called up for disaster relief all of the time. I think I'd rather have drones helping people in these scenarios than doing what they're primarily used for.
You can locate such "interesting" flights right now, using your browser. Just open up ADSB Exchange Virtual Radar, which doesn't filter out flights with certain squawk codes like other online virtual radar sites do. Once you do, select "Menu" from the map (with the gear icon), then "Options". Select the "Filter" tab, select "Enable filters", select "Interesting" from the dropdown listbox, and select "Add Filter". Now you can zoom out over the country and see all the "interesting" flights using the table on the left. Note any flights with the "LE" or "FBI" or "DHS" user tag.
Right now, as I write this, an FBI-owned aircraft is circling over the Norwood/Bronx area of NYC, tail number N912EX, registered to OBR Leasing (one of the "shells" that the US Gov't uses for registering its law enforcement aircraft), as mentioned in an AP story last summer
http://www.adsbexchange.com, select "Currently Tracking [number] Aircraft" on upper-right
Edit: Another one, flying around NW Los Angeles, right now: http://i.imgur.com/PwRpqRe.png
Edit2: Any aircraft squawking transponder beacon codes between 4401-4433 are engaged in law enforcement operations. More on the various squawk codes reserved by US Gov't operations can be found here (pdf link): http://www.faa.gov/documentLibrary/media/Order/FINAL_Order_7...
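For the programmatically inclined, the same filter can be applied to a VirtualRadar-style JSON feed. This is a hedged sketch: the URL and field names below are assumptions based on how ADSB Exchange's aircraft list is typically shaped, so verify them against the live site.

    import requests

    FEED = "http://www.adsbexchange.com/VirtualRadar/AircraftList.json"  # assumed endpoint

    def law_enforcement(aircraft):
        # Squawk codes 4401-4433 are reserved for law enforcement ops (see FAA order above).
        return [a for a in aircraft
                if str(a.get("Sqk", "")).isdigit() and 4401 <= int(a["Sqk"]) <= 4433]

    data = requests.get(FEED, timeout=30).json()
    for a in law_enforcement(data.get("acList", [])):
        print(a.get("Reg"), a.get("Call"), a.get("Sqk"))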
I still believe this should have been illegal, as they had no warrant and it violates the Posse Comitatus Act.
Let's play a guessing game! San Diego? We've got Miramar and Camp Pendleton here, along with crumbling infrastructure and terrible potholes. Although the city certainly doesn't need help finding potholes around here...
Using drones for this kind of thing actually makes a lot of sense, although not a $20 million militarized Predator.
(a) it took a freedom of information request to make this information public.
(b) the Pentagon did its own internal report and found that there was no wrongdoing.
(c) that nobody in the government is going to hold these clowns responsible or create any sort of legitimate process for determining whether these flights were legal or not.
I'm curious about the flight plan approval and filing process they go through with the FAA.
Also wondering if the flights show up on the 5 minute delayed ASDI API:
And if the drones carry ADS-B transceivers and if they show up on other transceivers during flight. (Which would make them visible/trackable to anyone ground or air based that is listening)
Sometimes, maybe. I'd argue rarely. I don't see much difference between an unmanned drone and an unmanned satellite or a manned helicopter in terms of applicable law.
Believing that a new law is needed because computers/drones/robots/AI/whatever now exist can lead to bad laws, or laws that are out-of-balance in terms of punishment. (i.e., commit a crime - 5 years. commit the same crime WITH A COMPUTER - 10 years)
> "It's important to remember that the American people do find this to be a very, very sensitive topic."
I think the media finds this to be an eyeball-grabbing topic, but AFAICT, the American people do not care much about it.
A) Conducted by the United States Army or the United States Air Force, and
B) Conducted to enforce domestic policies within the United States
Would be in violation of the Posse Comitatus Act. However, there is disagreement over whether this language may apply to troops used in an advisory, support, disaster response, or other homeland defense role, as opposed to domestic law enforcement.
"The Pentagon has publicly posted at least a partial list of the drone missions that have flown in non-military airspace over the United States and explains the use of the aircraft. The site lists nine missions flown between 2011 and 2016, largely to assist with search and rescue, floods, fires or National Guard exercises.
"A senior policy analyst for the ACLU, Jay Stanley, said it is good news no legal violations were found, yet the technology is so advanced that it's possible laws may require revision."
This sounds a lot less dramatic than the article headline, but it's good that this is being reported and discussed publicly.
It looked like a giant model aircraft getting launched when the wind hit it. Pretty cool at the time, although after reading this, I hope it was just a training mission over some non-existent Nevada AFB...
Gee, what oversight. I'm sure they'll be denying approvals left and right.
One case in which an unnamed mayor asked the Marine Corps to use a drone to find potholes in the mayor's city.
/signed by the rest of the world.
Investors can't just sit on money. They have to get returns, and that means that they have to put capital to work.
The point of funding should really be to enable faster growth than they might otherwise have been able to achieve, but if a business can't at least survive without huge influxes of investments then is it really a business that they should be investing in in the first place?
In practice, I'm guessing if the LPs get cold feet, then VCs will be forced to triage their funding decisions accordingly. How much this matters given the sheer size of some funds, I'm not sure.
We are currently in a state of "big ball of plugins and configuration". A bunch of plugins have been installed, and lots of manual configuration has been put into jobs so that everybody has what they need to build their software. It has led to Jenkins being a "do everything" workflow system. The easy path that Jenkins provides, to me, seems like the wrong one - it makes it easy to just stuff everything in there because it "can" do it. This seems to lead to tons of copy/paste, drift, and all types of different work being represented, and it is starting to become unmanageable.
Have others seen this happen when using Jenkins? How have you dealt with it?
Anything interesting deployed in the last hour?
Something in the CI/CD tool chain, Spinnaker, must have failed for it to move all the way to Live without being caught.
I absolutely love that. I'm a huge fan of what Hastings and company have done over there in terms of culture and making Netflix a unique and desirable place to work.
I think it's time for another round of "find a way to make Netflix hire me."
The Internet is the largest information system in the world, and Google is the primary portal into that information system. Google's "organic" results are accompanied by AdWords results, which are based on a mixture of bid price and relevance. These ads are marked with a small "Ad" label that many people miss, and even those who know they're ads can't really "unsee" those results.
So, searching the world's largest information system provides results which have been biased by money. How does anyone consider this ethical? Why are we letting money influence the salience of information?
What if your local library (you know, those old things) had a card catalog with "sponsored" results? If this already exists, then maybe we're already lost. But it seems to me that as a basic rule of information ethics, the salience of information in a given information system should not be biased by monetary influence. Full stop, the end, no exceptions. If anyone has a counterargument, I would honestly love to hear it, because this has nagged at me for a long time. I simply can't understand how AdWords is ethical.
We would be able to discuss, collaborate, create. We would be able to watch any film, listen to any song.
Some people called us "geeks", we liked to call ourselves "hackers".
We are not just hacking code, we're hacking a new world.
A quarter of a century later and most of those things are now reality. But somehow these great things have brought with them some hidden things. Things which we ignored or brushed off easily back in the day...
Like the fact that the Internet is now populated by the same demographic as the real world, not just hackers and dreamers. Now everyone is online.
We thought it would free us from oppression, but it is becoming the ultimate tool for oppression.
We thought it would give us true democracy, but it is becoming the ultimate moderation system for "foreign" thought suppression and a groupthink generator.
We thought it would serve our needs, but it is becoming the thing that tells us what to need. We thought it would satisfy our tastes, but our tastes are now being programmed into us by it.
Of course we're still high from all the positive aspects and it's not in our nature to be scared of things, but that will soon wear off... And when we wake up, what will we find there?
Either way, it is unstoppable and nobody can turn it off. So we only have to wait and see what it will ultimately turn into.
What will it be 25 years from now? Will we still be able to discuss this freely?
I would go even farther: I'm not particularly worried about individual interests at all, on any subject. The Internet is very good at exposing them.
I am much more concerned with bad classes of actors than bad actors. We see many ways in which competition breaks down because entire classes of people benefit from working in synchrony. The classic example is politicians: crooked elections mean longer terms which benefits basically all of them.
The other classic example is the capital class. If everyone in the capital class plays by the rules of property then they can exploit the labor class. Once you're in the capital class there are few reasons to compete with private property. Social pressure mostly neuters whatever capital class activists might try to keep working.
It's these class barriers that we should be worried about. But new weapons (like search engines) and new villains (like Islam) make much better news stories.
I dug into his CV and found the following related works:
- recent publications: http://aibrt.org/index.php/internet-studies
- The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections [http://aibrt.org/downloads/EPSTEIN_&_ROBERTSON_2015-The_Sear...]
A talk that he gave at Stanford about SEME: https://www.youtube.com/watch?v=TSN6LE06J54&feature=youtu.be
- Democracy At Risk: Manipulating Search Rankings Can Shift Voters' Preferences Substantially Without Their Awareness [http://aibrt.org/downloads/EPSTEIN_and_Robertson_2013-Democr...]
The basic argument is that search engine rank determines the trustworthiness of a source. This influences people's opinions on politics, what they buy, what they think, etc.
This is absolutely true, and the core of their research (it seems).
But then it goes into FUD territory when talking about Google backing Hillary. Hillary and Trump have received the lion's share of attention in media, social media, and such. Google searches SHOULD show them prominently.
Worse, the article basically finishes up with a "be afraid, be very afraid" approach that rankles me. "The new hidden persuaders are bigger, bolder and badder than anything Vance Packard ever envisioned. If we choose to ignore this, we do so at our peril."
No solutions or deeper analysis. No discussions on how a search engine should rank relevancy to search terms.
I personally have no doubt that mass-media, marketing, and the internet are shapers of opinions. Bias in the media, search engines, and such is a complex topic. Not something that should boil down to "Google could make it so Hillary wins" therefore you should be afraid.
I can't see how this is valid at all. We know that polls give biased results unless you are very careful with the sampling. Here people are self-selecting for a poll (eg via mechanical turk). Then you apply a highly contrived scenario that they are googling about a candidate. Then you ask them a bunch of questions, immediately, and proceed to draw wide ranging conclusions designed to increase your self-importance as much as possible. I mean, seriously, it's worse than useless.
This article is also written as if these findings are earth shattering. After conducting a small, biased, invalid study (Asking people in San Diego about an Australian election? How does that generalize to anything?) and finding a large effect Epstein says 'We did not immediately uncork the Champagne bottle'. Is that how psychology research is conducted? Researchers toasting large implausible effects in small biased samples that have no external validity?
Google Now currently displays cards to remind people to vote on voting day. Maybe it just happens to be more likely to show up for people that have been profiled as likely to vote for Google's favored candidate.
In it, the author explains that there are two types of surveillance cities that will emerge in the future: one where every park bench is rigged with a mic, every street corner has a camera aimed at it, and where all the data collected is funneled to law enforcement agencies; if you were mugged on some street corner, they'd be able to react to the crime swiftly and with high accuracy.
The other city is exactly the same, but all the data is made available to all citizens through an open API; so if you wanted to meet with someone on some street corner, you could decide for yourself if it was safe enough to visit, likely preventing the crime from happening at all.
Does anyone know what article I'm talking about?
I'd hate to think Twitter decided it was for the public good that everyone read what Romney had to say about Trump.
Reminds me of http://slatestarcodex.com/2014/09/24/streetlight-psychology/
>And in 2015, a team of researchers from the University of Maryland and elsewhere showed that Google's search results routinely favoured Democratic candidates. Are Google's search rankings really biased?
A greater portion of liberals use social media than conservatives (source: http://www.pewinternet.org/2012/03/12/main-findings-10/). Maybe they organically generate more links?
I would argue that the influence of media has always been this powerful. And media has always been biased.
Another angle to look at would be to apply the work of Stanley Milgram re: obedience to authority figures. Our ability to think for ourselves has some evolving to do...
What a quaint boogeyman this "Search Engine Manipulation Effect" is. It even has its own obscure little acronym, SEME, to appear more relevant.
I learned about the subtle effects of advertising by the time I was in fourth grade, and certainly understood how to ignore them by middle school.
Back when special holographic foil comic book covers and trading cards were new, I had already figured out that all of these "collectibles" were mass-produced, and would never wind up as valuable as, say, Action Comics Issue #1, despite so many claims otherwise. This was something you could kind of figure out on your own. If you were easily amused by shiny objects though, you might not arrive at the same conclusion.
Meanwhile anyone could figure out that the influence of single frame inserts in movies was as potent and realistic as the subliminal messaging in John Carpenter's Sci-fi movie, THEY LIVE.
So too, with Search Engines.
Figure if a fourth grader can figure out the shenanigans of opinion and belief influence in advertising, and unravel the bullshit of religion before high school ends, then this other newer form of bullshit is similarly debunked by comparable intellects. If you're so stupid that you buy into bullshit, without multiple channels of factual verification, you're your own worst enemy.
Okay, okay, maybe this is good reading material for an elementary school classroom assignment, focused on current events. Sure, why not?
I was hoping this would be about technological manifestations of psychic telepathy through malicious use of functional MRI systems.
I've seen a number of newer HN accounts flooding the site with articles... presumably for eyeballs + ad revenue, but hey, maybe they just want another link to lose attention.
I also doubt the results of their research. Nobody is going to vote for Donald Trump because he happens to appear first in a google search; that's just retarded. I think the fact that outsider candidates are locked out of legacy media megaphones and party power structures seems more harmful to democracy, and this has been accepted as "just how it is" for decades.
Does this "search engine manipulation effect" have an impact on top of the ballot votes? We still don't know. Does it have an impact on everyone else on your ballot? Nope.
Disclosure... I am the founder of a company that builds a tool for organizations to blatantly tell people who to vote for...
I swear, sometimes I think the world is inhabited by p-zombies, who don't actually think things through, but just mindlessly recombine previously consumed memes into (slightly) novel variants.
Was this written by a second grader?
On the other hand, our susceptibility to having our political system be disproportionately affected by a company or two with a top-down chain of command is a reflection that our system of representative democracy has weak links and can be easily subverted.
I wrote about the solution to this a while back: replace voting with polling! Have people cast their voice for POLICIES not REPRESENTATIVES. It is much more costly to fool all the people all the time, than to fool them at election time, and then go on to lobby the representatives they chose.
Voting depends on turnout, which skews the results and is susceptible to sybil attacks (remember facebook's vote about the newsfeed that got 3% turnout?)
Polling doesn't. It can be refined using better and better statistical techniques. We can gradually replace costly and stupid elections where candidates talk about their penis with polling of the population on issues like gun control etc. Replace the bickering lawmakers and filibusters with polling and thresholds.
It allows the kernel's Layer-2 and Layer-3 switching/routing configuration to be reflected down into the switch offload hardware, and the switch's ARP and MAC table data to be reflected back up to the kernel stack.
The overall idea being you can continue to use the same userspace tools to configure the routing/switching, and it all just magically goes faster if you have supported switching hardware.
> Q. Is SONiC a Linux distribution?
> A. No, SONiC is a collection of networking software components required to have a fully functional L3 device that can be agnostic of any particular Linux distribution. Today SONiC runs on Debian
I'm starting to believe that developers choose the OS/tools they are used to (Linux in this case) versus the one best suited for the job (BSD).
Wait. systemd, kdbus, GNOME and systemd-udevd. Shit.
We have met the enemy, and befriended it. Now we are the enemy.
When Microsoft put Nadella in charge, they made a great decision. And I honestly don't say that very often about top level management.
Now if only we had descriptions of chemistry that were this terse. Imagine if this kind of problem solving, collaboration, simulation, and instant verification were the norm for synthetic chem. One of the comments -- "[Let's] use gencols to rub the ship against gliders and *WSSs to see whether there is a useful collision to maybe build a puffer" -- just blew me away. If this were chemistry, that commentator would have been suggesting automatic nanomachine factory discovery.
(InChI appears to be close. But vast amounts of data are locked up in obtuse formats that are either Assigned-Names-And-Numbers style formats, which are useless for indexing and similarity searches, or formats that embed non-relative coordinates in 3D space, etc., in such a way that computing a deterministic ID for sharing is practically a nonstarter.)
You might be interested in a simple proof I found of why c/2 and c/3 are speed limits for orthogonal and diagonal spaceships respectively.
Definition: In a gameplay of life, an "infinite lifeline" is a sequence of pairs (c_i,n_i) such that each c_i is alive in generation n_i and either c_(i+1)=c_i or c_(i+1) is adjacent to c_i.
Lemma ("Two Forbidden Directions"): Let x,y be any two 'forbidden' directions from among N,S,E,W,NE,NW,SE,SW. In any gameplay of life that starts finite and doesn't die out, there is an infinite lifeline that never goes in either direction x or y.
The lemma's proof uses biology. Say that (c,n) is a "father" of (c',n+1) if c' is the cell adjacent to c in direction x or y. Otherwise, (c,n) is a "mother" of (c',n+1). By the rules of the game of life it's easy to show every living (c,n+1) has at least one living father and at least one living mother. It follows (modulo some more details) that since the gameplay doesn't die out, there must be an infinite lifeline where each cell is a mother of the next, i.e., an infinite lifeline that never goes in direction x or y.
Proof of c/2 orthogonal speed limit: If a spaceship went faster than c/2, say, northward, by the lemma, it would have an infinite lifeline that never goes N or NE. The only way it could ever go northward would be to go NW. Every NW step would have to be balanced out by an eastward step (of which NE is forbidden) or the spaceship would drift west. So every northward step requires a non-northward step, QED.
Proof of c/3 speed limit for diagonal: A diagonal spaceship faster than c/3, say, northeastward, would have an infinite lifeline that never goes N or NE. The only way for it to go northward would be to go NW. Each NW step would need at least two eastward steps in order for the ship to go eastward, QED.
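Not a proof, but you can sanity-check the speed bookkeeping empirically. Here's a minimal Game of Life in Python verifying that the glider travels at c/4 diagonally, comfortably inside the c/3 bound above:

    from collections import Counter

    def step(live):
        # Count live neighbors of every cell, then apply the B3/S23 rules.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = set(glider)
    for _ in range(4):
        cells = step(cells)

    # After 4 generations the glider reappears shifted by (1, 1): speed c/4.
    assert cells == {(x + 1, y + 1) for (x, y) in glider}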
This link has an animation of the c/10 spaceship.
BTW RTVS was built by the same group that made PTVS (Python Tools for VS) and NTVS (Node.js Tools for VS). RTVS will also be free & open source of course.
One thing I'm missing from my workflow would be a way to integrate into an IDE so I just push a button, and it'll commit the code to a gist and push the output to OneNote for other people to comment on. I'm wondering if it would be possible to fork this and tweak the calls to knitr so they use my library instead.
And I usually work from Linux, so it will be in a VM. But I'll try :)
http://adv-r.had.co.nz - Advanced R by Hadley Wickham
I keep my links on Github: https://github.com/melling/ComputerLanguages/blob/master/r.o...
There's a significant mindshare in Linux and Open Source. That being said, I don't understand why MS didn't provide Visual Studio, Office, and similar for Linux at a premium. For example, if Office was $499 for Windows, charge $999 for Linux. That way, they get the best of both worlds (use their software, pay them money). And their mindshare is significant as well, and this would increase it.
Maybe finally they are coming to their senses, doing just this. It's about time.
Addendum: Clearly the hive mind MS corporate drones are out in force today/tonight. I know everybody is enamoured with MS-Eclipse etc but R is best used unfiltered. Not hijacked into the laughable world of Windows and Visual Studio. I know from bitter experience that the Windows versions of R are terribly unstable by comparison with the Linux builds. I learned the latter precisely for that reason. MS is playing a fantastic marketing game but I was agnostic on platform until R on Windows started showing its catastrophic limitations. It's a second class citizen as soon as you venture beyond the basics. Take it from an ex R-on-Windows guy who uses R 10 hours per day.
In short: I'm the most vanilla, square, anti-drug person you can find. I don't want to use them, and I think other people would be better off if they reduced their usage as well.
Yet I can not for the life of me understand why drugs are illegal. Not just pot, all drugs. I'm totally onboard with making it our public policy that we want to reduce the use of drugs. That makes perfect sense to me. It does not make sense to me why anyone still believes that using the criminal justice system as the mechanism for getting to that goal is the right path. We are spending insane amounts of money on a failed approach while also generating huge negative side effects by creating an enormous group of people with criminal records. It's probably the worst thing this country has done to our own people since segregation and it seems like all of the policy people understand this. Why can't we get political will to do something different?
The issue is that for many opioids and non-opioids, the gap between the therapeutic dose range and the LD50 is often dangerously narrow.
Complication #0: serum levels of bioavailable drug are rarely monitored. People metabolize and clear drugs at vastly different rates.
Complication #1: Hospital mistakes still happen quite frequently, despite many measures to prevent them, especially with inexperienced and overworked nurses/assistants.
Complication #2: cumulative dosing errors or interactions, especially multiple, independent prescriptions for similar opioids with different administration routes (patches, sprays, pills, injections)
Complication #3: overprescription of opioids because they're cheap, especially to veterans, which also leads to prescription and hard drug addictions.
Solution: opioids need to be singularly controlled at home or in the hospital by an integrated, blood/interstitial-fluid measuring/dispensing unit to avoid OD and push back on abuse.
Plus, anyone taking opioids should also have narcan or equivalent antidote readily available, and wear a medalert QR code bracelet which lists relevant conditions and medications should they be found unresponsive.
Finally, avoid painkillers as much as possible and take the lowest dose that reduces the stress level.
As I grew into adulthood, I knew the pains I experienced were directly related to my condition, and it was my desire to not really 'cloak' the pain, but avoid it in the first place. Preventive if you will. It helps, but it's clear to me that I wanted to be healthy, and if I have to occasionally take something, so be it. Naproxen sodium has worked quite well of late.
The point of all this rambling is that I simply don't want the hassle of becoming addicted to pain pills. Or sleeping pills. Or nasal spray when it's allergy season. I've lived with pain so long for my life that I'm kind of used to it, and I do say so as a point of pride. It's the body I was born with and it's the one I'll have to use for this gig, take care of it.
I don't fault people for wanting pain treatment. I think the way the system was set up with pills flooding the US was incredibly destructive, and highly indicative of the dangers of for-profit medicine as a system. Toss in the DEA's drug laws and it just turns patients into criminals and that benefits only a very limited group.
When I eventually started seeing commercials on TV for a treatment for opioid-induced constipation, all I could think about was Trainspotting and that we have a real, genuine problem on our hands in the US.
Pain is self-reported, so all a doctor can do is prescribe based on patient demand. Maybe they can't identify an underlying cause, or maybe the treatment (ex: back surgery) is too risky.
Spine surgery that might not work and can leave you with say, loss of bladder control? I'd take the pill every time, and if my doctor didn't just hand it over, I'd find another doctor.
However, I've been in decent pain for 1 year+ before, so I know what it's like & can totally understand why people go for the powerful stuff. Continuous pain like that slowly but surely grinds your psyche to fine dust over the long run. That's the part that people without chronic pain miss...
While unethical prescribers (not all are physicians) contribute significantly to rising misuse of opioids, the vast majority of practitioners want to do what's best for patients. As the article notes, there are few options for managing chronic pain, leaving opioids the only realistic choice in many instances.
None of the providers I know think opioids are preferable, but more like a necessary evil. They prescribe opioids sparingly, reluctantly, diligently. Patients have told me it's become increasingly difficult to get prescriptions for quite modest doses of opioid agents they've used for years without dose escalation. The tendency to throw babies out with the bathwater is not unique to this situation, but no less problematic.
Blaming pharmaceutical companies doesn't seem a constructive approach. Probably there's a lot of R&D going on in this domain without much success, meaning it's a very hard problem to solve. I'm certain that a major breakthrough would be eagerly marketed, highly likely the profit margins would be huge. Meanwhile, we're left with the status quo, and manufacturers are meeting market demands. Isn't that how our economy works? Pharma sales are already more highly regulated than nearly all other industries, what more should be done?
Few legal drugs are as controlled in the US as Schedule II opioids. If there were no such controls, it's likely that the number of overdose-related deaths would be higher than it is. No one knows what solution will work; the need to be careful about changing the "rules" should be obvious.
The article's advocacy of "medical marijuana" as an alternative is IMO inappropriate. Simply enough, research on the uses of cannabis components for pain treatment is in very early stages. Specific indications and side-effect risks are inadequately understood. Recommending use of these components as treatments for pain is premature.
While the title mentions heroin, the article at least mentions that deaths are frequently due to more deadly prescription painkillers being mixed in. One thing I wonder that I haven't seen addressed (I'm not sure if there is even data available) is how many overdose deaths are due to use of multiple drugs at the same time (alcohol for example makes many drugs more deadly).
Hopefully there will be more and better reporting on the issue. IIRC (and Wikipedia agrees at least), these numbers mean that drug overdoses are now killing non-trivially more people in the US than car accidents.
It has been proven time and time again that systematically removing "common sense" regulations only harms society in the long run.
Please don't start a mundane discussion about what "common sense" means.
Does the US just over prescribe painkillers, meaning more flood to the blackmarket?
Is it people are getting it from the Dr and accidentally ODing?
Are the Drs prescribing without care, so those who want the drug for a high and no medical reason can?
I never knew painkillers to be used as party drug / fun drug in the UK (outside of the heroin using demographic) nor ever heard of some one ODing on prescribed painkillers.
Seems strange it is such a big issue in the US
Vox, stop with the hyperbole.
Granted, I use them somewhat occasionally (as needed) for pain, but they don't really cause in me the compulsive, addictive behavior I've read about. My internet addiction (HN included) is far worse than any chemical substance I've ever used.
The problem isn't that we don't have solutions. Solutions are a plenty. The problem is that no one in America cares. No one in this country gives a shit that people are dying. Most people want it to happen. They support the fucking drug war. They want people to die. Until this fucking shit changes, people will continue to die and idiots will continue to wonder what can we do? So many fucking things, I don't even have time to write them all down. That's the fucking sad part.
Not saying of course that everyone who gets these doesn't need them, I'm sure many do, but we have something like 90% of the worldwide consumption occurring in the States, so something is clearly up.
Opium has a reputation for making people passive, losing their will to rebel.
The new trend is that opioids are now cheap and prescribed not to the rich but to the poor.
Religion used to be the opium of the people, they said, and now that opium is cheap, religion is not needed anymore to make people servile.
I love this new era of progress.
Tomorrow do we make an application to help poor parents sell their kids' body parts on the internet to cure richer people?
I mean, let's try to make it even more dystopian. We can do better. That is what progress is: making the system more efficient.
If illegal drugs were all made legal tomorrow, we would see something similar.
The down-side is that a consumer has to run a full-blown eval() (as opposed to the more restricted JSON.parse()). This isn't that much of a downside in a typical webapp since you have full control over the browser process anyway, but it's deadly for cross-domain.
The upside is considerable for certain data-structures that are hard to represent in JSON efficiently, e.g. with a lot of denormalization.
A key concern for me is runtime efficiency, particularly compared to JSON.stringify.
As for the deployment patterns. If you're in the cloud then you should be baking AMIs (or equivalent in your cloud provider) and shipping your configuration the same way you ship your application code, as native packages like .deb or .rpm. If you jumped on the docker bandwagon then your hosts are basically there to look pretty and host the containers which means you have some other way of getting configuration to your servers, i.e. etcd, consul, etc. so the problems brought up in this post don't exist in that setting. You are also probably using some kind of container orchestration system like kubernetes so again the problem of orchestration and deployment is offloaded to some other system. The only problem you have in that setting is doing a rolling deploy of containers and halting when things go wrong.
I think the only place any of these tools make sense now is some private on-premise cloud. Every other place has already moved on.
- Running a job against a single host will finish in 3 minutes... running that exact same job against thousands will take well over an hour and max out your machine.
- Running against more than around 3k hosts will somehow consume all 60GB of RAM and trigger the oom-killer
- CPU usage on the ansible runner is absurd for a large number of hosts. We're currently using a c4.8xlarge (our biggest box) just to run deploy jobs and have them finish in a reasonable amount of time (10-15 minutes)
Slicing up our inventory into chunks and running them on different servers sucks big time and is pretty hacky. How do I combine the results? Can't do orchestration like "Run X on these roles first, then run Y on these roles when you're done".
Most likely what I'm going to do is have a single server execute ansible doing only the following in async (aka CPU friendly) mode:
- Upload a current copy of ansible to S3
- Upload the configs to the target machines with ONLY the secrets that role needs in plain text. (I'm not putting my vault secret on every box!)
- Have the servers pull it down and execute in --connection=local mode.
- Wait until each remote finishes
All that said, I LOVE LOVE writing stuff in Ansible. It is so easy to read, follow, and understand. I picked up most of it in a day or two just by reading their "Best Practices" page. Getting it to work at scale hurts though :(
I could see this if you're working from one really powerful machine... no, that won't work, it's constrained by SSH, not hardware specs.
I could see this if you're calling Ansible on another host... no, then you have to copy everything out to the sub hosts, who have to copy everything out to their sub hosts... A scalability nightmare.
You can use redis as a distributed store of truth and... wait, what? Now there's a blog post worth reading. Show us how to scale Ansible with real-world examples using redis and autoscaling groups. Please.
Got it. I can see this work reasonably well if you're willing to wait 10 hours for a deploy to complete. Personally, I'm not.
> If you want to have 1000 forks, that will cost about 30 GB of memory
Ansible is not, nor has it ever been, limited by available memory. It's limited by the number of concurrent SSH sessions it can handle while copying every single module to be executed to that host.
There's plenty of reasons and ways to use Ansible for deploying code. Some of the post has accurate and reasonable information, but the scaling portion is pure fantasy right now.
I've been toying with the idea of making a trolling-but-no-really deployment framework called tarpipe, and all it does is take some files on your host, get 'em to a $place in one step, and run hook.sh. Oooooptionally, do some dir moves and symlinks to keep a prior state backed up, and service stop|start on either side of the mv/ln, to minimize downtime.
Usage could be `tarpipe ssh user@host` or `tarpipe <(echo "cd keke && bash -c")` just as easily.
It goes without saying that this simply wouldn't be comparable to Ansible, Chef, or other CM because it's too simple. It doesn't help manage state if it escapes $cwd. But if your application can curb its enthusiasm to a directory... boy is it simple if that's true.
I already do this on the daily to crashland my bashrc and dotfiles on any new remote host. Maybe this kind of explicitly zero-dep deploy would be useful for more situations.
Would anyone have a use for that?
EDIT: What the heck: I prototyped it: https://gist.github.com/heavenlyhash/b575092aa84ce9f3e1d2
Ansible is easy to reason about - it's never surprised me once in use. You have about an order of magnitude less to learn when compared with chef or bconfig.
Also, for setups with small target VMs, it's incredibly handy to not have to install a bunch of stuff on each server and make sure it doesn't conflict with anything else.
But mostly it's that Ansible can be understood enough without devoting a couple weeks of your life to it. And you can come back later and understand what you have written.
- Super simple declarative yaml configs
- Agentless. You needed to have SSH working anyway so Ansible just uses that. With ssh pipelining it's so fast.
- The community support is huge and extensive.
- They have a module for everything and development is constant and active, much of which from the community.
- Hardware and networking equipment can be provisioned just the same as a VM or OS image.
The list goes on. Definitely give Ansible a try.
Maintenance: Sure, chef has a server component. So does ansible, if you're using it the way he suggests (with a host periodically running ansible playbooks on all hosts). Ansible has no client component to upgrade, though, so that's a win, right? It totally is, until Ansible doesn't work on a host and you can't figure out why, and the error logs you get are useless because some of Ansible's many assumptions about what the host's initial state is are incorrect. Chef can be managed by a standard package manager, which costs nothing on the client side, and allows far, far better assumptions to be made.
For the record, I eventually gave up on the chef server, replicated the playbooks to each machine (using a cronjob and git), and chef-solo.
Speed: Ansible pipelining speeds it up significantly. You can almost get one command a second! Chef runs on the host. It is ruby, and goes slow, but I have programmed a lot of chef and run a lot of Ansible, and my average chef run was under 30 seconds, and I've yet to have Ansible run any playbook in under a minute. Some of this is from atrocious default behavior, like requiring all hosts to complete a step before moving on to the next step on any host, or the fact that it spends nearly 10 seconds of CPU time on each machine 'gathering facts' at the beginning of its playbooks, even if none of those facts are ever used.
Fact caching: This is a solution to the aforementioned problems with Ansible. It may make sense in the chef-server context, but I don't have a whole lot of experience with it.
Tags: This is probably a matter of personal preference, but I prefer to give the set of things that need to be done, and have the tree descend downwards based on dependencies from there. The author clearly prefers to specify with tasks when they should happen, and for each host a set of initial circumstances. This one can be argued til you're blue in the face. I make the point that there's a clear tree of dependencies that can be built under my scheme.
Push vs Pull: There's no maintenance cost to upgrading? What is this dude on? When you change Ansible revisions, you have to do just as much work adapting as from chef revision to chef revision. Ansible has always been highly in flux, and not great about not changing default behavior.
Pulls still have to be triggered, but they can (and should) be triggered on-host, in a cronjob. Your monitoring system should alert you when the chef run is out of date, though, honestly, if it is failing on just some of your hosts you need to clean up and unify your infrastructure.
Raw numbers: Ansible puts all the load on one large machine. Chef costs you a tiny amount on each machine. One of these scales. One of these does not.
Search and inventory: Oh gods, if you're using Ansible for inventory management, please don't. If you're using chef for inventory management, please don't. Neither is a reasonable tool for the job.
Orchestration: Neither chef nor ansible is an appropriate tool for dealing with your application's data model. Full stop. Actually, full stop. There's nothing else of value further down this article. Please don't take any of its advice.
One of the obnoxious things about SEO is that if one person is doing it everybody has to do it. It's not necessarily enough to simply offer a better product at a better price. Luckily Google does try to reduce the effect of SEO. I notice for instance that StackExchange almost always beats out Expert Sex Change links these days.
Raise your hand if you want to go back to AltaVista/AskJeeves.
But this article is so far up its own ass.
> Ever gotten a crappy email asking for links? Blame PageRank.
Never mind that web rings were around long before Google and used the same tactics.
> Ever had garbage comments with link drops? Blame PageRank.
There are way more reasons spammers exist than just boosting PageRank.
The author is acting like a) Google had less of an influence on the web before PageRank was public information and b) the web was somehow better both back then and before Google existed. There will always be people who want to game search engine results, regardless of how much information they know about their own standing, and the web was pretty much un-navigable pre-Google.
Back in 2003 I wrote:
"PageRank stopped working really well when people began to understand how PageRank worked. The act of Google trying to "understand" the web caused the web itself to change."
It's amazing that it took this long.
And, the solution looks roughly like "weigh established authority to the point where it trumps relevance".
(I still offer Ad Limiter if you'd like to trim Google's in-house search result content down to a manageable level.)
We get pagerank SEO spam from time to time, and it's pretty annoying. I have the tools to take care of it within 5 minutes every day, but I do worry that if we grow to a certain point it may no longer be possible for me to handle the problem alone.
I'm sure many other sites have similar problems with comment spam, and I'd love to hear some advice on how to deal with this from sites that have the same problem.
Right now our main lines of defense are a recaptcha (our last remaining third party embed, ironically sending user data to Google I'd rather not send to deal with a problem Google largely created), and a daily update of an IP blacklist we get from Stop Forum Spam.
I tried to do some Bayesian classification, but didn't make much progress unfortunately. And nofollow really isn't an option for me, as it would involve me manipulating other people's web sites and I don't want to do that.
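In case it helps anyone further along than I got, a bare-bones version of the Bayesian approach with scikit-learn looks like this (spam.txt and ham.txt are hypothetical files of past comments, one per line):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    spam = open("spam.txt").read().splitlines()
    ham = open("ham.txt").read().splitlines()

    # Bag-of-words features, labels: 1 = spam, 0 = ham.
    vec = CountVectorizer()
    X = vec.fit_transform(spam + ham)
    y = [1] * len(spam) + [0] * len(ham)

    clf = MultinomialNB().fit(X, y)
    new = vec.transform(["check out cheap links here http://..."])
    print(clf.predict_proba(new)[:, 1])  # probability the comment is spam

My guess is the hard part isn't the classifier but collecting enough labeled examples of your particular flavor of spam.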
Edit: Changed "on airbnb" to "about airbnb"
You can still infer the approximate rank of a page by where it places relative to other pages, when searching for relevant keywords. Someone wanting to place ahead of the competition still has a function for measuring how well they are doing in SEO.
Therefore, the data is no longer open and power is now more concentrated: Those who know someone at Google can find out their page rank score; the 99.999...% of the rest of the world cannot.
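That relative measurement is easy to make concrete. A trivial sketch: given the ordered result URLs for a keyword (however you obtain them), your standing is just your position in the list.

    def serp_position(result_urls, domain):
        # 1-based rank of the first result from `domain`, or None if absent.
        for rank, url in enumerate(result_urls, start=1):
            if domain in url:
                return rank
        return None

    results = ["https://competitor.com/a", "https://mysite.com/page", "https://other.example/b"]
    print(serp_position(results, "mysite.com"))  # 2

Track that number over time per keyword and you have a rough stand-in for the retired toolbar score.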
Every system can be gamed. Every system where money can be made WILL be gamed. It's a predator-prey relationship.
The way this article was written made it sound like Google Search was a bane when it arrived. And sure, it was the worst Search Engine at the time, except for all the others that had been invented up until then.
>This page uses a plugin that is not supported
I guess my Chrome doesn't do .mov files in embed tags.
Well anyway, as far as embodied cognition goes, I believe that, in a certain sense, a star is part of my body while I'm looking at it. Post-it notes are part of our memory.
Seems interesting. Reminds me of the glasses that flip your vision upside down until your brain flips it back right side up and then when you take the glasses off you see upside down with no glasses. And also of blind people echolocating.
The 19 inch rack is one of the oldest standards in computing. ENIAC used 19 inch racks. Open Compute, though, uses wider, and metric, racks. 19 inch rack gear can be mounted in Open Rack with an adapter.
The Open Compute spec says that shelves of IT gear are provided with 12 VDC power. There's power conversion in the base of the rack. Facebook has standards for distribution to the racks at 54 VDC, 277 VAC three-phase, and 230 VAC (Eurovolt). Apparently Google wants to add 48 VDC, which was the old standard for telephone central offices.
Facebook's choice of 54VDC distribution is strange. Anyone know why they picked that number?
My pet issue w/ IT infrastructure is the management modules. Finding a server w/ a management module that works every time is nigh impossible. Do Google and Facebook design their own, or do they somehow just work around their quirks?
I ask every time, and this project is amazing, but it feels like it's just for the big guys!
Great to see that they are actively aware of CA monopolization, and taking steps to avoid becoming one themselves.
>Improved Java 8 language support - We're excited to bring Java 8 language features to Android. With Android's Jack compiler, you can now use many popular Java 8 language features, including lambdas and more, on Android versions as far back as Gingerbread. The new features help reduce boilerplate code. For example, lambdas can replace anonymous inner classes when providing event listeners. Some Java 8 language features -- like default and static methods, streams, and functional interfaces -- are also now available on N and above. With Jack, we're looking forward to tracking the Java language more closely while maintaining backward compatibility.
ART will recompile applications based on profiling data.
Introduction of support for hardware keystores, with the mention that one use case is to prevent jailbreaking.
Prevents NDK users who ignored the documentation and linked against non-official platform libraries from continuing to do that.
It would be absolutely amazing if Google came out with a mechanism for building native apps in, say, Rust.
I'm just a hobby developer, but the last four-ish months have been really exciting. There's been a ton released and polished in the developer console alone.
Blacklist input validation as defense against XSS? Are you kidding me? And then over to session fixation, where I see the exact same ?jessionid=blah example that has been in any Web Security book for the last 10-15 years? Come on!
Joined with Github, went through the password handling section, then saw this:
No no no no NO! Do NOT use SHA256 for passwords.
PBKDF2-SHA256 with 100k or more iterations? Okay, fine.
SHA256 the cryptographic hash function not designed for password storage? Bad advice.
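For anyone landing here from the course: a minimal sketch of the "okay, fine" option, PBKDF2-HMAC-SHA256 with a random per-user salt and 100k iterations, using only Python's stdlib:

    import hashlib, hmac, os

    def hash_password(password, iterations=100000):
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest

    def verify_password(password, salt, expected, iterations=100000):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, expected)  # constant-time compare

    salt, stored = hash_password("hunter2")
    assert verify_password("hunter2", salt, stored)

(bcrypt or scrypt are also fine; the point is a slow, salted KDF, not a bare cryptographic hash.)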
Fine-tuning these models for different applications has been a great way for me to build out new things without relying on an enormous fleet of K40s to train a new set from scratch. Lots of progress in this field, thanks to the whole team for releasing this.
I'm super impressed by what's coming out of Google's TensorFlow. Their ImageNet InceptionV3 model is a delight to play with in python!
Here is my project with the TensorFlow Inception model: https://github.com/AKSHAYUBHAT/VisualSearchServer
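If you want to poke at the model without a full project, this is roughly the classify_image pattern from TensorFlow's Inception tutorial; the file and tensor names below come from that example and may differ between releases:

    import tensorflow as tf

    # Load the frozen Inception graph (downloaded via the classify_image tutorial).
    with tf.gfile.FastGFile("classify_image_graph_def.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    with tf.Session() as sess:
        softmax = sess.graph.get_tensor_by_name("softmax:0")
        image_data = tf.gfile.FastGFile("cat.jpg", "rb").read()
        preds = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})
        print(preds.argmax())  # index of the top ImageNet class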
Turns out that the first thing you need to do is figure out if FB is the right channel for you. I found out that on FB, anyone will download anything that looks interesting and you can optimize your CPI fairly easily. But if you count on people spending money via in app purchases, the typical rules (1%-5% of active users) don't always apply for apps of different genres.
So many companies launch FB ads without proper tracking and then are surprised when they have no idea what it did for them. FB tends to group everything under the sun as "engagement" and "conversions", so really digging in and understanding those settings is key.
For example, 1-day view-through credit by default is probably a bad idea for many advertisers, particularly when you have no clue what the quality of a view-through is, and what they are worth to you. They are VERY different in terms of value, but FB wants to give them 100% credit with their rules within 1-day. That's simply not how most savvy people approach attribution.
Google Analytics offers some great basic attribution tools out of the box that let you experiment and compare different static models, or create your own static model. Ultimately static models themselves have inherent limitations because attribution is a much more dynamic thing that exists at the individual user path level, but it is a great start.
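To see why the static models disagree, here's a toy example: the same conversion path gets very different credit under last-touch vs. linear attribution (the touchpoint names are made up):

    path = ["facebook_view", "google_search", "email", "facebook_click"]

    def last_touch(path):
        # All credit to the final touchpoint before conversion.
        return {path[-1]: 1.0}

    def linear(path):
        # Equal credit to every touchpoint on the path.
        share = 1.0 / len(path)
        credit = {}
        for touch in path:
            credit[touch] = credit.get(touch, 0.0) + share
        return credit

    print(last_touch(path))  # {'facebook_click': 1.0}
    print(linear(path))      # 0.25 each, so Facebook's two touches total 0.5

Neither is "right"; they're different static assumptions, which is exactly why digging into the attribution settings matters before trusting the numbers.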
Lookalikes almost always outperform interest targeting.
Edit: I also wonder what would happen if I uploaded just my own email address? Would it find people very similar to me?
What's the best platform to advertise enterprise software on? I think IP level targeting is a good strategy, but curious to hear about other ideas.
Not necessarily true. OSX, Oracle, MS etc.. are used by many and are under constant stress test.
"As soon as you get an idea you try to defeat it. Youll be able to generate more ideas because you freed up mental space."
These are good ideas, both from coding as well as entrepreneurial perspectives. Every failure you encounter, if taken as a learning lesson, will make you or your ideas stronger.
I wrote a more comprehensive essay on ideas here: http://brianknapp.me/books/creative-pursuit/chapter-7/
> And we will continue to press our partners to allow digital information to cross borders unimpeded. We are working to preserve a single, global Internet, not a Balkanized Internet defined by barriers that would have the effect of limiting the free flow of information and create new opportunities for censorship.
This is technically correct. It doesn't create new opportunities for censorship. These opportunities always existed.
> The TPP illustrates these shortcomings well. Its free flow of information rules would only be enforced for foreign enterprises, and only those entities based out of countries that have signed the TPP. So if a country were to enact a law banning some type of online content, the TPP's free flow of information rules would do nothing to prevent the enforcement of that censorship against websites or platforms that are locally-owned in that country.
Yes and nothing prevents that already. The TPP doesn't alter the sovereignty equation, it is a trade treaty.
The TPP makes things worse but such weak attacks against statements that are technically correct are largely ineffective.
Posts like these are much more effective as they show legitimate problems that could have been resolved favorably (for the general population) in a trade treaty:
> Its free flow of information rules would only be enforced for foreign enterprises, and only those entities based out of countries that have signed the TPP.
Yes, because it's an international trade agreement.
1. control+g (Goto line number)
2&3. 11 (line 11)
4. enter
5. cmd+shift+down
6. cmd+x
7. backspace (delete trailing line)
8. cmd+shift+up
9. cmd+shift+l (multiple cursor mode)
10. right arrow (line up cursors)
11. tab
12. cmd+v
Coincidentally (or not), re: the challenge in the above post, I wrote a command that solves exactly this problem: yank-interleaved. https://www.emacswiki.org/emacs/KillingAndYanking#toc3
How many keystrokes does it take to install and use that function?
Emacs and vim have been battling it out for 40 years. There's probably room to improve the typing efficiency of both.
Emacs has ergoemacs and god-mode, for example.
Plus you can use vim key bindings.
Is a modal editor better? Should key bindings be built around a more efficient keyboard layout, like Programmer Dvorak, for example?