Edit: Addressing common replies:
"This is for people commuting already" -- okay, point taken; my point about Uber/Lyft still stands.
"Tipping isn't obligatory" -- yes, it kind of is. Uber used to bar drivers from asking, but they recently lost a lawsuit over that rule, so now Uber drivers will occasionally ask for tips (which will cause it to slowly become the norm). When tipping becomes the norm, the driver's low base wage becomes less of an 'issue', and then tipping becomes even more of a necessity, as that is where the drivers will make their actual margins.
I wonder if this will make it harder for GV to participate in any sort of funding rounds.
 "Google Ventures invested $258M at $3.7B post-money valuation in 2013" -- https://www.quora.com/What-percentage-of-Uber-does-Google-ow...
Can I tell Uber I want to share a ride to the airport with any stranger? (My taxi company will do that.)
Umm, who reading this article doesn't know who Google is? That construction is almost always there to let you know who some no-name subsidiary or division of a much more well-known company is.
In this case it functions in the reverse if at all--reminding people that Alphabet is a thing, in case you didn't know.
Anyway, I just think that's funny.
If what the article says is true about Google vetting problem drivers with mere user reviews, they don't know what they're getting into.
I think the idea is great of course, and I imagine it would cut down on freeway traffic during commute hours. It just seems that the legal web of trust, insurance, safety, etc will be a lot to handle.
Funny, because that was Lyft's (né Zimride) original model. The more things change, the more they stay the same.
I can't help but to think of Ford Prefect's Electronic Thumb from H2G2.
So, basic question, What is the difference between Alphabet and Google again? It seems like everything is still being branded as Google. I know it's slightly off topic, but I am honestly confused as to when something is not Google.
The other night, a pizza server at the shop next door said business was very slow. It's summer and nobody buys pizza. She added that the cab ride costs her $10 each way, so she makes nothing for the day.
This is the type of news I would post on the employee board when it comes to my area.
I see this as something similar. But I was never able to find someone to carpool with.
However, how can it be viable for the driver? I understand that someone already going in that direction can make a little money, but if I want to live on it (like Uber is pitching), will the pay be enough?
I can't help but wonder if this ride sharing is a similar move. It sounds like a stepping stone for the kind of services that might be practical with self-driving cars. There might be some angle on collecting data that isn't obvious.
Talk about panic catch-up with no intrinsic advantage, nor vision. "Mountain View, start your photo-copiers". We know where that ends...
Larry and Sergey have shown in the past 3 years that they have no staying power on anything that isn't an obvious profit lay-up in short order. This thing will burn through cash at a rate that will make any of their other ill-fated ventures look like a bargain. I mean, Uber has already coughed up 1.2 yards this year!
Smells like Google+ all over again. Isn't this the sort of sham that the Alphabet carve-out was supposed to avoid?
This is progress, but we've got farther to go.
This is what happens when you take glyphosate, mark it up well over 50x, and package it as some miracle product. If your marketing strategy is to mislead every single purchaser, then it's no surprise that you lose sight of how many strands of bullshit marketing you're running.
Roundup is a great example of one of those products that are cash cows for companies that market themselves as "the best solution".
Every. Single. One. of the Roundup products is glyphosate, and that stuff will kill anything and is very, very cheap.
Dear readers, be aware: glyphosate is a chemical that is present in all weedkillers (except the really shitty ones), so buying the brand name is a total waste of your money, and the amazing people at Roundup HQ know it. Buy the no-name, unbranded stuff.
I can understand Coca Cola etc selling sugar water for huge margins, but I pull my hair out when it comes to something like glyphosate. That's how I get my roots under control.
Doesn't that create a bad incentive for whistleblowers not to do anything until it's too late?
This is more than enticing!
EDIT: Nevermind, I did not read the article before commenting. My mistake.
Basically, as we get to understand exactly how cells work, how bacteria do what they do, and how they change, we won't need to scrounge around in the dirt hoping to find something that kills bacteria. We'll engineer whatever we need to kill whatever cells we want to kill.
But there are other ways. One way is to follow the thread of research opened up by William B. Coley who developed Coley's Toxins, a cocktail of bacterial toxins that sparked the body's own defense mechanisms and in many cases, caused cancer tumors to turn to jelly within days and start being reabsorbed by the body. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1888599/ has more about him.
Or there's the route of genome therapy, where researchers are studying the active genomes in both healthy and sick (or cancerous) cells to understand what knobs and buttons exist in the human organism that we might be able to adjust by means of various therapies, sometimes even benign ones. There is evidence that one of the many hundred subtypes of cancer will respond to everyday blood pressure medication. This is a relatively benign drug that, in the right conditions, will kill cancer cells. Of course, the right conditions include that the patient has certain specific genes. But genomic techniques can discover these genomic markers and help us sort out the mechanisms by which cells resist attacks from hostile bacteria. The ultimate outcome for cancer would be that your doctor takes a biopsy of the cancer cells, their active genomes are analyzed, and this information is used to build a molecular machine that manufactures a custom drug that will cure your cancer.
Look at the molecular machinery of the Polymerase Chain Reaction which makes copies of DNA molecules https://en.wikipedia.org/wiki/Polymerase_chain_reaction
And there is Reverse Transcription which converts RNA molecules to DNA molecules https://en.wikipedia.org/wiki/Reverse_transcription_polymera...
Not to mention the Ribosome which is the molecular machine in your cells which manufactures protein molecules https://en.wikipedia.org/wiki/Ribosome
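To get a rough feel for what reverse transcription does at the sequence level, here's a toy Python sketch. It only models base pairing (A→T, U→A, G→C, C→G, read antiparallel); real reverse transcriptase, primers, and enzyme kinetics are of course far more involved.

```python
# Reverse transcription pairs each RNA base with its DNA complement;
# the resulting cDNA strand is antiparallel, so written 5'->3' it is
# the reverse complement of the RNA template.
RT_PAIRS = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the complementary DNA (cDNA) strand for an RNA sequence."""
    return "".join(RT_PAIRS[base] for base in reversed(rna.upper()))

assert reverse_transcribe("AUGC") == "GCAT"
```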
It's almost too simple to work, but I've heard that over time - thousands of years in fact - it is a strategy humans have used to build not just safe sex, but many other benefits as well http://www.theglobeandmail.com/life/relationships/the-power-...
This entire video series is about inspiring people with a side benefit of illustrating what makes YC special.
Compare this to content by other accelerators (of which there are many). It's not a lecture nor a recital of advice. It's a series of relatable and personable stories with a consistent theme. Start with "Why" you do something, not how or what.
http://www.ycombinator.com/press/ quotes 1297 startups.
http://www.seed-db.com/accelerators/view?acceleratorid=1011 has 1069 companies (+ ~100 from the S16 class), for ~1200 in total.
>>Sometimes they create a small success, and sometimes they create these companies that really transform the world, and YC has been very fortunate to be involved in a lot of these, Airbnb, Dropbox, Stripe, the list goes on.
I mean... really? OK, I'll grant you Airbnb, but Dropbox "transformed" the world?
If you look at the fine print in the published "Guidelines for implementing Net Neutrality" linked in the article, you will see that there are three exceptions to the rule (a, b, c), with "c" being the one we should fear most:
a) "comply with Union legislative acts (...)
b) preserve the integrity and security of the network, of services provided via that network, and of the terminal equipment of end-users;
c) prevent impending network congestion and mitigate the effects of exceptional or temporary network congestion, provided that equivalent categories of traffic are treated equally.
This last one ruins the whole law. And it is not what I, as a European, wanted. ISPs won :(
Then we could at least measure whether they offer the same bandwidth to Netflix and Vimeo as they advertise. Net neutrality at its best.
Edit: Of course the number will be very low because they have to (God forbid!) provision their network to serve this bandwidth to all customers during peak hours. But what we're looking for is not a huge number - we're looking for a number that allows meaningful comparison with competitors.
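If you wanted to collect a comparison number like that yourself, a minimal client-side sketch might look like this. The test-file URLs are placeholders (you'd need real files hosted by each service), and a single download is a noisy estimate at best; the point is just that the same measurement applied to two services gives a comparable pair of numbers.

```python
import time
import urllib.request

def throughput_mbps(nbytes: float, seconds: float) -> float:
    """Convert a byte count and elapsed time into megabits per second."""
    return (nbytes * 8) / (seconds * 1_000_000)

def measure(url: str, max_bytes: int = 5_000_000) -> float:
    """Download up to max_bytes from url and return the observed Mbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read(max_bytes)
    return throughput_mbps(len(data), time.monotonic() - start)

# Hypothetical test files -- substitute real ones for each service:
# print(measure("https://example.com/netflix-testfile"))
# print(measure("https://example.com/vimeo-testfile"))
```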
That would hardly have been expected: in the first six months of being a Commissioner, Oettinger met with two NGO representatives but with 44 corporate lobbyists.
And this is why Brexit is so heart-breaking. I'm surrounded by people in my personal life who think it's a fantastic idea, but they're not the most... informed? Likewise for local politicians.
(Side note to my rant: I have this theory that the rise of the iPhone, and the fact that it is such a big part of people's lives now, has fooled regular folks into believing that they're experts on technology. I have no more than anecdotal evidence for this).
I strongly suspect that local legislators will see no conflict whatsoever with scrapping these laws when the exit finally comes, and it saddens me that I'm surrounded by a lot of people that will be cheering when it happens.
This is from a real conversation I had this week:
"What it boils down to is do you want to have us control our own laws and decisions and borders, or have to take orders from some bureaucrat in Brussels that doesn't understand us?"
Yes, I would rather have decisions made by people in Brussels that understand what they're doing.
These are the first 9, the other 10 are here: https://gist.github.com/daveloyall/a1112bb70412d77bebc809090...
This Regulation aims to establish common rules to safeguard equal and non-discriminatory treatment of traffic in the provision of internet access services and related end-users' rights. It aims to protect end-users and simultaneously to guarantee the continued functioning of the internet ecosystem as an engine of innovation.
The measures provided for in this Regulation respect the principle of technological neutrality, that is to say they neither impose nor discriminate in favour of the use of a particular type of technology.
The internet has developed over the past decades as an open platform for innovation with low access barriers for end-users, providers of content, applications and services and providers of internet access services. The existing regulatory framework aims to promote the ability of end-users to access and distribute information or run applications and services of their choice. However, a significant number of end-users are affected by traffic management practices which block or slow down specific applications or services. Those tendencies require common rules at the Union level to ensure the openness of the internet and to avoid fragmentation of the internal market resulting from measures adopted by individual Member States.
An internet access service provides access to the internet, and in principle to all the end-points thereof, irrespective of the network technology and terminal equipment used by end-users. However, for reasons outside the control of providers of internet access services, certain end points of the internet may not always be accessible. Therefore, such providers should be deemed to have complied with their obligations related to the provision of an internet access service within the meaning of this Regulation when that service provides connectivity to virtually all end points of the internet. Providers of internet access services should therefore not restrict connectivity to any accessible end-points of the internet.
When accessing the internet, end-users should be free to choose between various types of terminal equipment as defined in Commission Directive 2008/63/EC (1). Providers of internet access services should not impose restrictions on the use of terminal equipment connecting to the network in addition to those imposed by manufacturers or distributors of terminal equipment in accordance with Union law.
End-users should have the right to access and distribute information and content, and to use and provide applications and services without discrimination, via their internet access service. The exercise of this right should be without prejudice to Union law, or national law that complies with Union law, regarding the lawfulness of content, applications or services. This Regulation does not seek to regulate the lawfulness of the content, applications or services, nor does it seek to regulate the procedures, requirements and safeguards related thereto. Those matters therefore remain subject to Union law, or national law that complies with Union law.
In order to exercise their rights to access and distribute information and content and to use and provide applications and services of their choice, end-users should be free to agree with providers of internet access services on tariffs for specific data volumes and speeds of the internet access service. Such agreements, as well as any commercial practices of providers of internet access services, should not limit the exercise of those rights and thus circumvent provisions of this Regulation safeguarding open internet access. National regulatory and other competent authorities should be empowered to intervene against agreements or commercial practices which, by reason of their scale, lead to situations where end-users' choice is materially reduced in practice. To this end, the assessment of agreements and commercial practices should, inter alia, take into account the respective market positions of those providers of internet access services, and of the providers of content, applications and services, that are involved. National regulatory and other competent authorities should be required, as part of their monitoring and enforcement function, to intervene when agreements or commercial practices would result in the undermining of the essence of the end-users' rights.
When providing internet access services, providers of those services should treat all traffic equally, without discrimination, restriction or interference, independently of its sender or receiver, content, application or service, or terminal equipment. According to general principles of Union law and settled case-law, comparable situations should not be treated differently and different situations should not be treated in the same way unless such treatment is objectively justified.
The objective of reasonable traffic management is to contribute to an efficient use of network resources and to an optimisation of overall transmission quality responding to the objectively different technical quality of service requirements of specific categories of traffic, and thus of the content, applications and services transmitted. Reasonable traffic management measures applied by providers of internet access services should be transparent, non-discriminatory and proportionate, and should not be based on commercial considerations. The requirement for traffic management measures to be non-discriminatory does not preclude providers of internet access services from implementing, in order to optimise the overall transmission quality, traffic management measures which differentiate between objectively different categories of traffic. Any such differentiation should, in order to optimise overall quality and user experience, be permitted only on the basis of objectively different technical quality of service requirements (for example, in terms of latency, jitter, packet loss, and bandwidth) of the specific categories of traffic, and not on the basis of commercial considerations. Such differentiating measures should be proportionate in relation to the purpose of overall quality optimisation and should treat equivalent traffic equally. Such measures should not be maintained for longer than necessary.
> "The screening of two subsets of compounds for antiviral activity (Supplementary Fig. 2a and Supplementary Fig. 4b) was performed in a blinded manner, whereas all other experiments were performed in a nonblinded manner." http://www.nature.com/nm/index.html
Also, I searched for the virus strain they used and the first thing I clicked on claimed it has issues with relevance in vivo:
> "Anyone who is using viruses termed ZIKV MR766 needs to carefully examine the sequence composition of their stocks. Multiple viruses all termed MR766 may have different sequences and biological properties. In the case of the MR766 we are using in our studies, there is a deletion in the challenge stock that is strongly selected against quickly in vivo." https://zika.labkey.com/wiki/OConnor/ZIKV-002/page.view?name...
Niclosamide is also listed by the WHO as "one of the most important medications needed in a basic health system."
There should be more drug-repurposing research, but it's probably profit-prohibitive for the big pharmas :(
Also really liked the search strategy: first try all the things already approved by the FDA, even if not 100% ideal, since it's so much faster than seeking FDA approval of something new. Curious how this will change in the near future given the prevalence of deep learning and some of the OpenTrons-type testing systems.
One area where we've had trouble with other orchestration tools (e.g. Docker Swarm) was in managing resources at anything beyond whole boxes. They are all good at managing CPU/RAM/disk, but we've had trouble with "give this task GPU2". We had planned to try Mesos (given that we already run it for other things), but it sounds like maybe we should take a harder look at Kubernetes first.
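For what it's worth, Kubernetes handles GPUs through device plugins, so "give this task GPU2" becomes a resource request rather than naming a specific device. A minimal sketch of a pod spec (names and image are made up; the `nvidia.com/gpu` resource assumes the NVIDIA device plugin is installed on the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task              # hypothetical name
spec:
  containers:
  - name: trainer
    image: example/trainer:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1     # scheduler places this pod on a node with a free GPU
```

The scheduler then picks a node with a free GPU and pins the device for the container, so you don't hand-assign GPU2 yourself.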
<0.1 = Too technical for even HN audience
0.1-1.0 = At the right level for the HN audience
>1 = The topic is similar to painting the bike shed.
It's unfortunate that so much effort has been spent on bringing tools up to speed with Python 3, but some groups still insist on dragging their feet. I understand the motivation when we're talking about an established company with a huge legacy code base, but within the research community it's kind of embarrassing.
For Java/Scala people, Deeplearning4j has a pretty sophisticated Spark + GPUs setup:
[Disclosure: I help create DL4J, and it's supported by my startup, Skymind.]
Would be curious to see the data around the economics of the different options.
"On the one hand, if you wanted really pristine independence, it means you are going to need people who don't have commercial ties to the industry," Mr. Gilson said. "On the other hand, if you have people without any commercial ties to the industry, they are not much use."
If there were no conflicts of interest at all, then we'd be discussing an article about how horribly In-Q-Tel's investments performed, because they were made by bureaucrats in D.C. who had no idea how startups worked.
Nor do they have even particularly critical national security technology. In-Q-Tel probably has less dark tech knowledge than your average building at NBP.
One of my startups was contacted by one of their employees (it's almost always an employee, not the org directly), and they really just facilitated our data analysis processes, and made some connections. It never got to the stage where we accepted any money, and it was a lot less creepy than some of the government contractors cough Mitre cough.
Disclaimer: I have not, do not, and will not work on mass surveillance technology.
And I have to add, IQT serves a ton of govt agencies, not just the CIA.
Do I get this right: US taxpayers are giving their hard-earned dollars so that a government agency (the CIA) can take it and, through their venture capital firm (!), invest it in something that might, but also might not, work?
Is this actually fiscally responsible? Is it in the job description of an intelligence agency to be a VC fund, not with private money, but with the money of the shrinking US middle class?
Why should I think about using this instead of (or in combination with?) the plethora of other similar offerings out there?
Although this repo, https://github.com/baidu/paddle_paddle_model_zoo, suggests they might be working on one.
Given TensorFlow's rising dominance with AI researchers and practitioners and the existence of other frameworks with large installed bases like Theano, Torch, and Caffe, I don't think this new framework has much chance of gaining wide adoption in the US or other markets in the West. In my opinion, TensorFlow's network effects are too large to overcome at this point.
However, Paddle could gain significant adoption in China, Baidu's home market.
EDIT: My opinion could be wrong. To find out, I've created an HN POLL so we can all see which deep learning frameworks the HN community would use to build new products and services today. Link to HN POLL: https://news.ycombinator.com/item?id=12391744
The dirty secret of modern textbooks is that there has been a huge arms race in developing material that makes it easier to teach a course with less work. I don't really think that truly benefits students, since most of this material isn't for the student, it's for the professor (or more cynically for the free labor of grad students used to do a lot of the teaching). But you can't sell a textbook these days without it. I wish there was a universe in which you could sell a simple Psych 101 textbook for $50 and have that be that. But the market won't adopt a simple book like that without the myriad of extras. And since the market here is wonky, meaning the person making the decision (the professor) doesn't bear any of the cost (and might in fact benefit if it's her book), we end up where we are.
This isn't to say that the whole new edition thing isn't worth criticism. Or that there aren't all sorts of things wrong with the industry. But the percentage increase you see in the cost of a textbook for students doesn't just equate to the same percentage increase in the profit margin of textbook publishers. They're not just making the same product from 40 years ago and jacking up the prices 1,000% as the article implies. The product is a whole different beast.
(source: I sit on the board of my family's textbook publishing company)
So I was thinking, I'll just sell them back when I'm done and get the money back. Nope: they fetched only a small fraction of the original price I paid, because a new edition was already out.
Ever since, I've considered the publishers of those books scam artists and crooks. In college I had a meager stipend, and only later got a job on campus for minimum wage -- I had to work extra hours at night to make sure I'd have enough for the next quarter, when an unpredictable (but high) amount of money would be needed to pay for books.
The other thing I fucking hate is the free resources given to professors (yay, fiscal budgeting!) that shift the revenue stream to the student. For example, iClicker provides the server to the professor for free but makes the students buy a physical clicker ($50), when the service could just use the fucking web browser and make the professor/university I pay tuition to purchase this REQUIRED part of the course. Or the online homework, which is also a purchased item (like $60-70 too). I don't pay for the organic chem lab equipment (and couldn't; that stuff is expensive!).
To be clear, I don't have a problem with using these materials. I also don't mind paying for them. But I ONLY want to pay for them through tuition!
Also, why don't publishers work directly with universities so that every student is provided with a copy of the textbook/homework? It seems to me that would be a surefire way to prevent lost sales to the used-textbook market and piracy. They could just get the university to purchase PDFs of the textbook for each student every year: it would kill the incentive to pirate and would mean they didn't need to develop their own shitty-af reader. A win-win.
Anyway, to those still reading: sorry 'bout the language. This status quo drives me absolutely bonkers.
I wonder why students' preferences are even a consideration: if they learn just as well with older, pictureless books why not use those?
> But it's worth asking: if professors know these textbooks are absurdly expensive, why assign them? Well, the answer involves a couple of factors, basically: many professors simply don't know the prices of the textbooks, and, far less frequently, sometimes the professors themselves wrote the book.
FWIW, when I was in school I recall one professor doing his best to ensure that our bill for each of his classes was never more than $50. I think that all of our professors exercised some care in that regard but ours was a small liberal arts college, with every class taught by a professor or associate professor.
I know of 12-year-olds who lug around 10 kg of school books. No one cares.
They talked about how it's more complex than resellers charging mark-ups to meet market demand. Often, venues face either social pressure or pressure from artists to keep prices low. The idea is that if the venue sells Katy Perry tickets at market price (say $2000 each), her fans are going to be upset with not only the venue but also her, so her management will make sure that doesn't happen.
However, simply lowering ticket prices means losing out on tons of money for both the venue and the artist. One solution in use: the venue offloads large chunks of its ticket pool to ticket resellers (Ticketmaster/StubHub) under the condition that they split a percentage of the markup with the venue and artist. The reseller absorbs the social blame for 'egregious' ticket prices, and the venue/artist keep more of the profits than if they had kept prices below market.
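The arithmetic of that arrangement is easy to sketch; all the numbers below are invented for illustration (face value, resale price, and the venue's share of the markup are not from the article).

```python
def settle(face_value, resale_price, venue_share, n_tickets):
    """Split resale revenue under the arrangement described above:
    the venue/artist get the face value plus a cut of the markup,
    and the reseller keeps the rest of the markup (and the blame)."""
    markup = resale_price - face_value
    venue_total = (face_value + markup * venue_share) * n_tickets
    reseller_total = markup * (1 - venue_share) * n_tickets
    return venue_total, reseller_total

# Made-up numbers: 1,000 tickets, $100 face value, resold at $250,
# venue/artist keeping 60% of the $150 markup.
venue, reseller = settle(100, 250, 0.60, 1000)
# venue/artist end up near $190,000 instead of the $100,000 they would
# get at face value, while the reseller nets roughly $60,000.
```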
Opera companies run like VCs: most of the audience/startups are losses, but a few of them are donors/unicorns who keep the opera/VC afloat.
Every flight/hotel comparison website has as its goal making the yield-management team as ineffective as possible. Being too effective means running your suppliers out of business!
So let's untangle these two sentences near the end of this essay.
1. I believe that the San Francisco Opera should raise its prices to match - or exceed - what this ticket scalper site is asking.
2. But lower ticket prices will usually result in more butts in seats.
The entire post is written from the perspective that the Opera should be making the absolute highest amount of money per ticket sale possible. But at the end of the post, the author very casually mentions that this is going to probably mean a lot fewer butts in seats watching opera. And then completely ignores it, going on to suggest ways the SFO could learn from TicketMiddleManThatHopesYouDon'tThinkVeryHard.Com.
I have not been to the San Francisco Opera; I do not know how much it's focused on the bottom line. But I would bet money that, if offered the choice between getting about twice as much per seat, and getting more viewers, the SFO would probably choose the latter. Because arts organizations don't just exist to make money; they exist to spread their art. You have to pay enough attention to the bottom line to be able to pay everyone involved enough to keep doing it, but it should never be an organization's only concern.
The author of this post also offers us no statistics on how many tickets to the SFO are actually bought through the scalper site that's selling them for about twice what they cost, versus ones sold directly by the Opera. The fact that someone built a robot that automatically scrapes a ton of venues and offers their tickets at 200% markups, and did some SEO tricks to make it pop up higher than the actual site for people who don't block ads, doesn't mean they're automatically capturing the money of everyone who types "san francisco opera tickets" into Google. Without evidence of that, I seriously doubt his thesis that this is money the SFO is leaving on the table.
I am also pretty amazed by the fact that the closing paragraphs suggesting "dynamic pricing" and noting that it can be very complex to do don't casually link to any of the services provided by the company whose blog this is on, because they seem to be trying to do just that.
1. the existence of a bot that offers San Francisco Opera tickets at a crazy markup doesn't mean enough people actually buy through it to be worth chasing,
2. raising prices to match this bot probably runs counter to the part of an opera house's mission that's focused on Getting More People Interested In Watching Opera Instead Of Playing Video Games Or Whatever.
3. geeze dude you write an essay this long that's a stealth ad for your company and you can't even slip a link into the concluding paragraph?
Not exactly. I mean yeah, I sometimes pay them, but I simply go out far less because of them. So choose wisely if you're going to go that route!
> Nearly every major venue and sports team sells to brokers. This brings cash in the door and reduces the risk to the venue.
These two comments are in direct opposition to each other. I don't like scalping of tickets, but if a venue is knowingly and willingly selling to scalpers, then obviously they do get some net benefit.
Partial solutions include blocking the CA's certs based on the issuance date or insisting they hand over a list of the certs they've issued - but if the CA is going down in flames anyway, they have no incentive to cooperate; they can backdate certs and destroy their own customer list.
My theory is that this is one of the side benefits of Certificate Transparency: CT will give browser vendors a list of certs to grandfather in if they decide to shut down a CA against its will.
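A toy sketch of that grandfathering logic: the cert's own notBefore date can be backdated by a rogue CA, so the only trustworthy evidence of age is inclusion in a CT log before the cutoff. All IDs and dates below are invented, and real CT uses signed timestamps (SCTs) and inclusion proofs rather than a simple lookup table.

```python
from datetime import date

DISTRUST_CUTOFF = date(2016, 9, 1)  # hypothetical date the CA is shut down

def grandfathered(cert_id, ct_inclusions):
    """Keep a cert only if a CT log proves it existed before the cutoff.
    The cert's own notBefore field is deliberately ignored, because a
    CA going down in flames can backdate it."""
    logged = ct_inclusions.get(cert_id)  # date the cert was CT-logged, or None
    return logged is not None and logged < DISTRUST_CUTOFF

ct_log = {"honest-cert": date(2016, 5, 2)}  # invented inclusion record

assert grandfathered("honest-cert", ct_log)
# A backdated cert was never logged, so its claimed issuance date doesn't help:
assert not grandfathered("backdated-cert", ct_log)
```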
1) WoSign may face revocation (I doubt it but I don't know), but there is no evidence of that in this article. This is just one person not affiliated with a root program "calling for" it. People on the internet call for revocation of major CA roots all the time.
2) I don't really know what a "fake" cert is, it's a very strange choice of words. I would think a fake cert is not a real cert, and in that case issuing fake certs is fine because browsers won't trust them. It seems the problem here is that real certs were issued when they shouldn't have been. That's called "mis-issuance", not "fake certs."
If a CA pulls shit like this, they need to be revoked immediately, and let the wrath of the thousands of businesses impacted by cert warnings rain down upon them. That will 1) solve the security problem immediately and 2) publicize what it means to get a cert from a crap CA that doesn't care about security.
Sure it will suck for the "little guy" who didn't know but, if you don't do this, he'll never know and never learn.
I realize that X.509 name constraints are utterly broken, but that doesn't mean that browsers can't manually restrict the domains that a given root is accepted for.
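A toy sketch of such a browser-side constraint. The `.cn`-only policy here is purely hypothetical; the precedent is real, though: browsers restricted the ANSSI root to French TLDs after its 2013 mis-issuance incident.

```python
def allowed(hostname, permitted_suffixes):
    """Accept a cert chaining to a given root only for hostnames under
    one of the permitted DNS suffixes (enforced by the browser, not by
    X.509 name constraints in the cert itself)."""
    host = hostname.lower().rstrip(".")
    return any(host == s or host.endswith("." + s) for s in permitted_suffixes)

# Hypothetical policy: only trust this root for .cn domains.
PERMITTED = ["cn"]
assert allowed("example.cn", PERMITTED)
assert not allowed("github.com", PERMITTED)
```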
Did the StartSSL root CA change hands / was it sold to a Chinese company (WoSign)?
I seem to remember the CEO used to be vocal in various SSL and CA forums and on Bugzilla earlier... but no comments lately?
> Possible fake cert for Alibaba, the largest commercial site in China: https://crt.sh/?id=29884704
> Possible fake cert for Microsoft: https://crt.sh/?id=29805555
Yikes. If all of that is true, surely Google will permanently ban WoSign from Chrome? And I would hope Mozilla and Microsoft, too, but Google is usually the one to "play tough" with rogue CAs (and I hope they will strive to develop and maintain that reputation).
I would much rather see a recommendation and efforts around Uglify than a brand new minimization tool, and I would rather see work on acorn rather than around a new parser, because these things fragment the community rather than bring them together.
Edit: Yes, Uglify has taken a long time to get ES2015 work done, but note that the spec has changed a lot over that time. It does feel like they are dragging their feet or not getting much done on that front, but that's all the more reason to put one of the Babel devs, who may have more free time, on the project to get things going.
At a usability level, babili has a 33M install footprint and must be installed locally in the directory in which you intend to use it.
uglify-js can be installed globally and has a 1.9M install size.
And I've never understood how to configure babel's preset and plugin files. Why is it necessary for a minifier to do this? Surely this can be greatly simplified.
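For reference, enabling babili was usually just a one-line preset. A minimal `.babelrc`, assuming `babel-preset-babili` is installed, might look like:

```json
{
  "presets": ["babili"]
}
```

Even this small amount of configuration is more than uglify-js asks for out of the box, which is the complaint.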
Anyhow, what's this thread about?
The reason is that the page design assumes it's on a mobile phone, with its somewhat unique tall portrait orientation. And sure enough, looking at the page on my phone, it looks a bit flat, but it works well.
So now we are at a point on the web where browser users get the crappy UX because someone spent all their time focused on the other community and didn't bother to make their pages responsive to both.
IMHO, the big chance anyone had to disrupt FB was a paradigm shift away from social news feeds. That paradigm shift arrived with photo sharing and messaging on mobile, where increasingly people were just sharing pictures and text messages privately. However, Zuckerberg saw that one coming and acquired Instagram and WhatsApp to head off any disruption.
That shouldn't stop people from trying to innovate. But we should not regard being smaller than the leader as a failure. I use G+, Twitter, and FB, but I have the best conversations on G+. Twitter discussions are an exercise in frustration, and I find the signal/noise on FB to be worse.
There's a benefit sometimes to having a smaller audience.
Instead, they decided that 2nd place was not enough, said "just kidding, it was actually an identity service, no, wait, a content discovery platform, yes!", and turned into... whatever it is they are doing today.
They could have been the Facebook that is not Facebook, or in Randall Munroe's words, "all I really wanted". Too bad.
Seems like an awful waste of space.
I want my context to be preserved in the tab that I'm in. I didn't log out of Google+, so I should be logged in still.
These are the basics. Before they Material-Design-the-hell-out-of everything, maybe they should create a foundation that works properly.
I am writing some feedback hoping that some Googler will read this and improve something.
But there's more! If I click inside the search bar, I get a two-second pause to load "featured collections", "featured communities", "Suggested People & Pages", and "Suggested Posts", REPLACING THE PAGE I'M ON, because Google clearly knows better.
Collections is like following random Pinterest boards created by other people. I don't know about others, but I like to follow official things, or things that have the most followers, but none of that info seems to be surfaced. "Featured" really means nothing to me. Is it hand-picked? Randomly generated? Are they paid to be featured?
Communities are cool, but it's really hard nowadays to beat the communities in subreddits. Anonymous users seem to give a lot more to the community in an unselfish manner, whereas Google+ users seem to post in communities in a self-promoting manner. This could just be anecdotal and my subjective viewpoint, but that's what I see.
I honestly don't know how Google can do social, but I'm glad they're trying different things. Hopefully they try something new.
Random thought: I find Slack very similar to Google Wave.
I don't care what it looks like, if it doesn't work, I won't use it.
I also used the Hangouts on Air feature extensively and never understood why it had to be originated in + and why you _had_ to invite people. The best use case for this tool was to do screen recording that was automatically imported into YouTube but getting the right combination of + account and YouTube account and making sure you were authorized to use Hangouts on Air with YouTube was incredibly frustrating. Hopefully the new flow using YouTube Live will allow going on-air without forcing you to invite an audience.
And while I'm at it - suggesting people join Communities but hiding the fact that there are sub-topics in these communities was a huge dark pattern I hated. I joined the Linux community thinking I would see some interesting packages or hacks or discussions and all I found was perpetual posting of Wind0w$ is teh Suck memes and obvious spam. The _majority_ of my interactions on + was marking posts as spam and blocking users hoping content would improve, and it never did. It wasn't until much later that I figured out I could unsubscribe from sub-topics, if I could only find where they were listed.
I usually hate one-note comments about the web platform a post is hosted on, but since this is a blog post about Google+ web design, hosted on Google+... I guess it's telling.
No, I will not start using it again.
Meanwhile, they are trying to make the guber car service.
It's hard to understand from the outside.
For something that's supposedly social, I'm deeply disappointed with how Google+ has been developed (read neglected) over time. I like some aspects of G+ (like the layout, font, font sizes, etc.), but two things that are grating are the lack of custom URLs and the unintuitive navigation scheme (compared to Facebook). I still post to G+ once in a while (although, there's really no audience there) and look for improvements with the hope that I can start nudging people away from Facebook and get more traction on an alternative platform (another walled garden, but at least not as evil, IMO). It's sad, for me, that even long wait times don't show much for progress. If the strategy seemed convoluted while Vic Gundotra was managing it, his departure left the platform languishing as if it were a part time project.
Anyone from the Google+ team reading this - firstly, please bring in basic stuff to the platform that's important for people to share, and secondly, please copy Facebook shamelessly in whatever it's doing well for user experience.
Lastly, thanks a lot for (reverting to and) retaining the freedom of users to use pseudonyms on the platform!
By the way, the new design looks great; I just wish the performance were better in Firefox.
[Edited for clarification]
Please allow me to remove shit from my 'recommendations'.
In between Android developer things which I like, I'm getting weird-ass pro-Trump BS that looks like Stormfront. I guess this stuff is all that's left on G+, but still. Not interested.
It just needs more users. It is great to follow developers on.
Here is a seed list of active G+ users that may be relevant to you (remember how empty it was when you first signed up for Twitter? You had to follow some seed people...):
https://plus.google.com/+ElijahLynn (Elijah Lynn, myself, web developer)
https://plus.google.com/110558071969009568835 (Koushik Dutta, Android dev)
https://plus.google.com/+JonoBaconProfile (Jono Bacon, former community manager for Ubuntu)
https://plus.google.com/+ChrisWeber (web developer)
https://plus.google.com/110043970153071176315 (Chad McCullough, Linux/BSD guy)
https://plus.google.com/+UrsHlzle (Sr. VP of Tech Infrastructure @ Google)
https://plus.google.com/+DerekRoss (Phandroid)
https://plus.google.com/+KirillGrouchnikov (User interface engineer on the Android project at Google)
https://plus.google.com/+LukeWroblewski (Author of Mobile First, Product Director @ Google)
https://plus.google.com/+BensonLeung (USB cable guy, Google)
https://plus.google.com/+DaedTech (Software Engineer, Writer)
https://plus.google.com/+IlyaGrigorik (Performance Engineer at Google)
https://plus.google.com/+DanielleBuckley (G+ Team at Google)
https://plus.google.com/+ChetHaase (Sr. Software Engineer at Google)
https://plus.google.com/+GoogleChromeDevelopers
https://plus.google.com/+google
https://plus.google.com/+GoogleMaps
https://plus.google.com/+JonathanZacsh (Software Engineer)
https://plus.google.com/+AddyOsmani (Engineer at Google)
https://plus.google.com/+DonnaPeplinskie (Front end developer)
https://plus.google.com/+NityaNarasimhan (Engineer, Consultant)
https://plus.google.com/+IanHickson (author and maintainer of the Acid2 and Acid3 tests, Google)
https://plus.google.com/+JeffreyZeldman (A List Apart)
Not active but still:
https://plus.google.com/+LarryPage
https://plus.google.com/+SergeyBrin
https://plus.google.com/+EricSchmidt (former CEO at Google)
https://plus.google.com/+SundarPichai (CEO at Google)
He seems to be a serial abuser of his team and stakeholders, and the forging of payment receipts was just downright criminal. I hope there is enough evidence now for criminal action to be brought against him, and he is prevented from ever trying to run a company again in the foreseeable future.
According to their Crunchbase (which could obviously be incomplete), there are no institutional investors listed in their initial or seed round. I mean, it's one thing to waste other people's money on a dumb idea, but this seems masochistic.
edit: Another befuddling thing besides the enormous headcount was the fact that the original whistleblower was brought in from Dallas and offered equity and a $135K salary and a signing bonus. That seems like a huge chunk of money for a marketing executive at a small tech startup. Again, ignoring the reality that he couldn't really afford her, what did the CEO imagine would be investors' reaction to such a pricey hire?
Or that it pretty much 'confirms' their guilt for a lot of people?
But yeah, not surprising this has happened already.
I've encountered people who reported title inflation on their LinkedIn page, oversold what they did, and when starting their own venture, due diligence pointed out severe faults the individual papered over.
Good or bad, things will catch up with you eventually.
It's been said that "love is a better master than duty," and requiring college students to take a physical education course sounds like a duty.
If you want people to exercise without coercion for the rest of their lives, I think you have to tap into intrinsic motivation, i.e., the unique things that get you excited. Depending on your personality and life experience, that might be novelty, socializing, recognition, or competition.
Forcing college students to exercise will backfire, leading many of them to stop as soon as the course is over.
This thread is a good argument that many people don't choose effective health habits without coercion.
It's probably good for public health for the government to subsidize adult sports such as running, soccer, and swimming to the point of being effectively free, along with requiring them each semester at state-sponsored schools.
It's probably past time for me to put on my running gear and take regular evening jogs, myself. :-)
And schools can always give several options to accommodate different interests and capabilities.
It certainly makes sense for them, considering how much energy they expend to run that fast. So perhaps there's precedent.
Not entirely true - in fact further down in the article:
"Furthermore, says HUHS director Paul Barreira, the same surveys show that students' own sense of health and well-being tracks the amount of exercise they report getting. Those with the most depression and anxiety also get the least exercise. The happiest students get the most."
So the penalties include depression, anxiety, and in future, propensity for heart disease, diabetes, hypertension, etc., etc. So either we haven't evolved enough to consider the penalty of dying young and having a more miserable existence while alive, or is it just that evolution optimizes to the point of reproduction and then doesn't care about much after successfully handing down genes to the next generation?
(Edit: I might also ask, without putting forth a theory or guess, what statistics say about obesity and likelihood of reproduction? If health is poorer than non-obese humans, is there a lesser chance of successful reproduction? If so, shouldn't that be considered the ultimate penalty?)
But if I need to do something like fix a car, build something, whatever, I'll work myself to utter exhaustion and not even notice.
[EDIT:] It occurs to me that this is the sort of amenity that would be basically free for Harvard to provide. Because it would be a highly visible part of students' lives, hordes of donors will line up to sponsor individual bikes (with donor nameplates), pay stations (likewise), named chairs for bike mechanics, or even the whole program.
Make your organism always maintain optimal muscles and a healthy circulatory system while burning a lot of calories? Seems like a win/win to me.
Some people are strong and healthy without exercise; I think we are all entitled to the same package?
I had PE in university (and before that, in school), hated it, and now I'm turned off from any exercise.
Currently Infinit.sh has my attention the most, but it's quite young still.
edit: https://news.ycombinator.com/item?id=12125344 - this thread seems to be talking about what I want. With that said, I'm not yet sure if `mc mirror` supports Backblaze, as that (per price point) is my prime need.
- Spin up a bunch of droplets on DigitalOcean, because I want reliability, etc.
- What's the best way to share drive space across these to create a single Minio storage volume, so if one DO node goes away I don't lose my stuff?
How does something like this behave with really large files? Video files in the 100s of gigabytes, for example. I'm asking because if one could set up a resilient online (online as in available) storage with fat pipes like this, it could be used as a platform to build a centralized video hub for editing. It's another question how much sense it would make over a filesystem, though.
Also: failure and backup modes.
Install it on your phone? Anyone you have in your phone's address book gets to see your picture under "people you may know".
Someone in your family joins Facebook and friends you? Now everyone you are friends with gets prompted about whether or not they know your family member.
Want to delete some pictures you uploaded to Facebook? It's extremely difficult and they must be deleted one by one.
Other than LinkedIn, I'd say FB is the prime innovator of UI dark patterns that exploit users' unwitting behavior for profit.
The youngest generation of internet users gets this, which is why they largely do not use Facebook. Soon they will realize that IG and WhatsApp are connected, and will avoid those too.
What's interesting to me is that the recommendations are fundamentally not useful. It's easy to look someone up by searching for their name without the privacy-invading helpful suggestions.
People You May Know still had old high school friends, my old real estate broker (??), and someone I starred on GitHub. I have absolutely no idea how they connected that account to my old one, considering Google Mail is the only other service I've used on that laptop.
TLDR#2: The recommendation to "prevent" these issues on the individual's side is, "Lisa's medical community has started recommending that patients concerned about privacy not log into Facebook or other social media accounts at medical offices, or even leave their phones in their cars during appointments."
This is about as practical as recommending people just figure out how to fly and occasionally levitate into the upper atmosphere to go out of the cell tower's range, move a few kilometers west, and then fly back down to earth to scramble all these tracking algorithms.
Back in March I laid out how they could use a private set intersection protocol to enable any pair of users to privately share their contacts: https://news.ycombinator.com/item?id=11289223 (I'm not posting this to shame them or something: March wasn't that long ago for developing a feature like this, and of course it's open source; I could develop it myself and submit it to them).
I think it's something they care about; they've just not found a solution they're comfortable with yet.
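For illustration only, here's a toy Python sketch of the goal (this is not a real PSI protocol: plain hashes of low-entropy identifiers like phone numbers can be brute-forced, which is exactly why proper PSI uses interactive cryptography; all names and numbers below are invented):

```python
import hashlib

def h(contact):
    # Naive hash; real PSI uses interactive crypto because phone
    # numbers are low-entropy and plain hashes can be brute-forced.
    return hashlib.sha256(contact.encode()).hexdigest()

def shared_contacts(mine, theirs_hashed):
    """Return my contacts whose hashes appear in the other party's set."""
    return {c for c in mine if h(c) in theirs_hashed}

alice = {"+15551234", "+15559999"}
bob = {"+15551234", "+15550000"}
bob_hashed = {h(c) for c in bob}

print(shared_contacts(alice, bob_hashed))  # {'+15551234'}
```

The point of a proper PSI protocol is that each side learns only the intersection, and an eavesdropper (or either party) can't recover the other's full list, which the naive hashing above does not guarantee.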
* "You're both friends of Duffman McPartyDude" * "We found Psycho Ex Boss's phone number in your contacts" * "Location Services confirms you were both frequenting a dubious drinking establishment at 4am three Saturdays ago"
Many possibilities here:
1 - whatsapp connection with messages exchanged
2 - contact list loaded by whatsapp
3 - psychiatrist secretary number in whatsapp
4 - friends in common
5 - places in common
Imagine the possibilities. What a wonderful world!
 If this were to come true, then the word "possibilities" would be replaced by "synergies" :)
Granted, that's Schmidt, rather than Zuckerberg. The attitude seems to be the same, though.
Facebook is full of shit. Of course they are using locations; why else would I get a suggestion to friend the guy who cuts my mother-in-law's yard? He stops by for a check from my wife.
It makes sense that people using the same access point or connecting to Facebook from the same external IP would likely know each other.
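A crude sketch of that kind of signal (purely hypothetical; this is not Facebook's actual algorithm, and the log format is invented) would just group accounts seen behind the same external IP:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical login log: (user, external_ip) pairs.
logins = [
    ("alice", "203.0.113.7"),
    ("bob", "203.0.113.7"),
    ("carol", "198.51.100.2"),
]

def colocation_candidates(logins):
    """Suggest user pairs that were seen behind the same external IP."""
    by_ip = defaultdict(set)
    for user, ip in logins:
        by_ip[ip].add(user)
    pairs = set()
    for users in by_ip.values():
        pairs.update(combinations(sorted(users), 2))
    return pairs

print(colocation_candidates(logins))  # {('alice', 'bob')}
```

This is also why the psychiatrist's-waiting-room story is plausible: everyone on the office Wi-Fi shares one external IP.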
I got friend recommendations from FB for other members of the support group.
Such a terrible excuse. FB you only have one job! Fail.
All these people have one friend in common with this person, maybe they know each other as well? Being a psychiatrist or whatever has nothing to do with it.
EDIT: I stand corrected. Not so simple regarding where they get the "potential friendship" data from. Diagonal reading mistake on my part.
Cost of living here is low. Decent basic house is $150-200k in the burbs.
Awesome office space is cheap. Great tax incentives from KS and MO to start companies and create jobs.
I honestly would have no interest being in the valley trying to compete with the likes of Google, Facebook and others for top talent. In KC we can be the cool company everyone wants to work for!
We are Stackify, located in Leawood, Kansas! :-)
I still love the area so I'm still sticking around for a bit longer, but I don't really see what this guy's seeing.
Wait, it was 2012 and the venture capitalists were /just then/ figuring out it would be a good idea to be close to the "best computer science school in the world"?
Not that I had any respect for VC's before, but my opinion just dropped even more.
> ... entrepreneurs are building more billion-dollar companies in the Midwest than in the last 50 years combined.
Yes it's called currency devaluation combined with ZIRP.
I do wonder if they'll write such a flowery article when entrepreneurs discover that their odds of success actually go UP if they pass on VC altogether.
We've been building a 100% remote startup since late 2011. It began in the Midwest (Missouri) and it has worked extremely well. We don't even have a location at this point and can't imagine a situation in which we'd elect to.
Life is inexpensive and good in Arizona, and occasional flights to Silicon Valley are short and affordable. I spent my life in California until we moved to Arizona 18 years ago. Benefits of Arizona: very friendly people (try chatting with people in California when you are in a grocery store checkout line, at the gas pumps, etc. - a culture of unfriendliness), low population density, very affordable housing, clean air and a nice climate (at least where I am in the mountains), our state government is not as badly in debt as California's, etc.
I know many talented software developers and multiple VCs in the area. This is a hub for top-notch universities and for every major pharmaceutical company, and most major financial firms have a large presence here, including the largest mutual fund company. Plus, it is a short trip to New York (media/Wall Street) and DC (government and military), which brings lots of subcontracting projects.
In short, there is a ton of opportunity, talent, and money.
Nitpicks from the article:
"California is the eighth largest economy in the world. The Midwest is the fifth"
Really? Source on that? What states exactly does that entail?
"The Midwest receives 25 percent of all research dollars in America and graduates more computer science degrees than any other region or country on planet earth."
"home of the leading computer science school in the world, Carnegie Mellon"
Not one of the leading, but THE leading CS school. Even better than MIT? Better than Berkeley? Better than Caltech? I'm not saying any one of those is THE best, but I would argue they are all top-notch.
I wonder if Llama Soft ever feels bad about stealing the name of the still-extant British indy game house Llamasoft for much more boring purposes.
Austin? I dunno if it's really a startup hub or not.
This reads like a parody until it inexplicably turns out not to be.
Columbus and Ann Arbor both have big, important universities smack in the middle of town, and the demographics and culture that go with that. If you were choosing the least-surprising towns outside of the Bay Area and NYC to find software startups in, those would pretty much have to be on your list.
Because you want to sleep close to your money.
"The answer, we concluded, had everything to do with timing. In the first few decades of the Internet, you had to be physically close to your technology. That technology, the talent that built it, and the ecosystem that maintained it was in Silicon Valley."
C'mon. You didn't need to be close to your technology. PCs and servers and backbones can be installed anywhere.
You wanted to sleep close to your money. But now you're willing to loosen the leash a little if it gives you more bang for the buck.
Put another way:
"Nobody will ever need more than 640k."
Since the days of Fairchild, the bay area has had an advantage in finance, and of course VCs who didn't want to invest in anyone more than a bike ride away. The level of attitude is really kinda astounding.
California was the golden state... but that was 50 years ago when it had a pro-business government and was disrupting the US industrial base, and attracting high tech talent and companies in the process. Now it is eating itself, drowning in debt and getting ever more desperate due to decades of bad government.
In our industry things have changed. The need for VCs is dramatically less. You don't need to build servers and software from scratch, you can rent servers in the cloud and build on open source foundations.
There are always going to be more smart people outside of SV than in it. And TBH, I don't think SV startups have been really innovative for the past couple of decades. It's like after the dot-com boom everything became an "app" (instead of a "dot-com", e.g. a website).
For example, you aren't building Intels and Apples anymore; you're building Googles (which might become an Apple if one of its moonshots turns into a product, but it isn't yet; it hasn't weaned itself from search, and search's days are numbered). You're building Facebooks and Ubers and Airbnbs. It may not look like it if you've not been through a couple of bubbles, but those companies are more flash in the pan than solid. Facebook could be more properly called a fad.
The difference is they aren't really technology companies. Google invented page rank, but almost nothing fundamental since. Facebook hasn't invented anything really. It's just a popular content site. Uber and AirBnB are business models, not technology inventions. Of course they all require and depend on technology investment, but they are not based on technological innovations like Intel and Apple are. (And don't even get me started on Amazon which is really a combo of Walmart and U-haul-for-computers. They actively oppose innovation.)
And for some high-tech industries, Silicon Valley is not the center. The disdain with which Californians look at Bitcoin is a good example. It's kind of a "not invented here" syndrome.
It's like we're the internet, you're AOL at its peak. You rule the roost right now, but you don't get it.
The tone, though positive, is quite patronizing to midwestern ears. If you're not investing here, you're missing a huge opportunity.
Many of these items seem correct to include in "A comprehensive history of the most important events that shaped the SSL/TLS and PKI ecosystem," but it feels very inconsistent in what it includes.
Dates are given when browsers implement protocol support, but not OpenSSL, NSS, etc. (Actually, nothing positive is said about OpenSSL at all.) Also, no mention of Nginx, Apache, or IIS and their TLS/SPDY support and features?
Brian Smith is mentioned by name working on a Rust crypto library, but no mention of DJB when discussing ChaCha20-Poly1305? (Is Ring actually used by any major projects so far?)
It came in the wake of 'Lucky 13' and the demonstration of RC4 biases exploitable in TLS, and showed the awkward situation that existed at the time: essentially all supported ciphersuites were vulnerable to something, and no mainstream browser supported TLSv1.2 yet in which non-vulnerable ciphersuites were present.
Even if a reference isn't made to the blog post, the timeline should somehow reference the aforementioned ciphersuite conundrum.
This was a very impactful proposal that changed the way browsers preferred ciphersuites. But it also removed some lesser-used ciphersuites based on telemetry, including the block cipher Camellia, which was the only other modern block cipher in TLS after AES.
That stuff is really hard, even when the text is ostensibly present in the PDF (as opposed to the PDF being an image of text). Thing is, it's all just "draw text" commands in the content streams (basically postscript programs). The text commands appear in no particular order and you generally have to compute the layout to see even where the spaces are (which is still a guessing game, because the PDF generator will vary the space width to achieve a visually pleasing format).
OCR is an approach that wasn't quite ready for prime time back then, so it's cool to see people working on it!
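A toy illustration of that layout step (all names and the gap threshold below are invented for illustration): given positioned text chunks on one baseline, as a PDF extractor might emit them in arbitrary order, sort by x-position and insert a space wherever the horizontal gap looks like a word break:

```python
# Toy reconstruction of reading order from positioned text chunks,
# roughly the problem one faces with PDF content streams. The chunk
# format and gap threshold are hypothetical.
def reconstruct_line(chunks, gap_threshold=2.0):
    """chunks: list of (x_start, x_end, text) tuples on one baseline,
    in no particular order. Returns the line text with inferred spaces."""
    out = []
    prev_end = None
    for x0, x1, text in sorted(chunks):
        if prev_end is not None and x0 - prev_end > gap_threshold:
            out.append(" ")  # gap wide enough to be a word break
        out.append(text)
        prev_end = x1
    return "".join(out)

chunks = [(30.0, 50.0, "world"), (0.0, 12.0, "Hel"), (12.0, 27.0, "lo")]
print(reconstruct_line(chunks))  # Hello world
```

The guessing-game nature of the problem is visible even here: the right threshold varies with font size and justification, which is why real extractors get it wrong so often.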
I think this is totally off base. It assumes that safety innovation is only achieved via regulation, and cannot be achieved in any other way.
It never ceases to amaze me that people still think roads wouldn't be built without the government, or that they would be sub-par. People think that without government, the world would be in complete anarchy, when the only examples we have close to that show the opposite(http://www.independent.org/publications/tir/article.asp?a=80...).
How practical could it be to have laws against trolling? There are online groups who have rules against doxxing, who seem to be able to enforce them. I don't think it would unduly contravene free speech, if public spaces on the internet assumed consent to communication between individuals, with the option of individuals revoking consent. This would be analogous to what people do in public spaces. Then the act of creating a throwaway account on Twitter to continue harassment would be analogous to continue to harass someone in real life. There would be exceptions, but generally, this would be definable and enforceable. (And I would also say, it would be more constructive than picking a figurehead like Milo Yiannopoulos and banning him as a scapegoat. Even if enforcement could only catch a few people.)
Sorry, you are not better than any other human.
If anything, roads and cars should be more like software. Which is not to say no laws. But the laws should not be so many volumes as to be unapproachable and prescriptive and there should be automated systems for testing and validation.This is certainly doable for software.
We nailed it with arrays (Jan Knepper's idea), the basic template design, and compile-time function execution (CTFE).
It's a bit sad that Rust is getting all of the attention in the spotlight, because D is a great, modern, safe language. If you've only heard about D 15 years ago and never tried it again, give it another look. The current D is really a new language, which was briefly called "D2" for a while.
Now, if Symantec could just fix the stupid licence of the reference compiler...
It seems like a very solid language. You can write low level C like code if you really want to, but it defaults to safer, higher level code without losing much efficiency.
I'm going to try it out with some bigger projects. I have some issues: Windows support seems a little flaky, but it's certainly usable. (The default dmd compiler works very well on Windows, but the code it generates isn't the best it could be, and ldc generates much higher-quality code but isn't quite stable on Windows, although it's certainly looking good enough to use.)
I think it's well worth a look. I like it a great deal more than Rust
What kind of question is that? Anders Hejlsberg was what, 39 when he started working on C# and 52 with Typescript? It's the same as with screenplays. You start writing when you have experience.
Video: C++, Rust, D and Go: Panel at LangNext '14:
Key team members and inventors of those languages speak.
I can't find the link at the moment, but I think it's interesting because despite it being the simple solution, they eventually realized it was holding back adoption amongst business users.
Edit: Found the article, "Babe Ruth and Feature Lists", where former Google PDM Ken Norton writes:
>We were relying on the browser's rich text surface, which used HTML as the underlying data format and caused browser compatibility nightmares. You'd bold a word and then be unable to un-bold it. Bulleted lists couldn't be edited or deleted without screwing up the whole page or turning everything into a bullet. Centering a line would often center-align the entire document. Formatting bugs had been an annoyance for Googlers, but they were fatal for groups of students who needed to print a term paper to a teacher's exacting specifications.
0 - https://library.gv.com/babe-ruth-and-feature-lists-1818bb8c6...
For an even-poorer-man's google docs, I like to keep this bookmarked:
data:text/html, <html contenteditable>
Why ContentEditable Is Terrible, Or: How the Medium Editor Works (2014) (medium.com): https://news.ycombinator.com/item?id=11487667
But isn't it skeuomorphic to have pages based on real paper sizes?