Hacker News with inline top comments - 20 Aug 2017 (Best)
1
Why We Terminated Daily Stormer cloudflare.com
851 points by SamWhited  3 days ago   1512 comments top
1
r721 3 days ago 14 replies      
>"This was my decision. This is not Cloudflare's general policy now, going forward," Cloudflare CEO Matthew Prince told Gizmodo. "I think we have to have a conversation over what part of the infrastructure stack is right to police content."

(from internal email)

>Let me be clear: this was an arbitrary decision. It was different than what I'd talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I'm the CEO of a major Internet infrastructure company.

http://gizmodo.com/cloudflare-ceo-on-terminating-service-to-...

2
Essential Phone, available now essential.com
798 points by Garbage  2 days ago   661 comments top 5
1
zanny 2 days ago 12 replies      
Since no one else has, I'll take the piss out of this "holier than thou" bullshit.

> Devices are your personal property. We won't force you to have anything you don't want.

Devices are your personal property. The SoC is still a proprietary trade secret, the baseband is still spying on you for the NSA, the GPU is still a closed blob piece of shit. No mainline driver support, bootloader is closed source, firmware is closed source. We own this phone, you don't.

> We will always play well with others. Closed ecosystems are divisive and outdated.

....

> Devices shouldn't become outdated every year. They should evolve with you.

Devices become outdated because shitty vendors refuse to open source and mainline drivers for their components.

> Technology should assist you so that you can get on with enjoying life.

Technology should be trustable, and a device where you cannot tell if or when the microphone and/or camera are recording and being remotely accessed is anything but.

Not wanting to single Essential out too much here - every vendor goes on and on about how great their phone is for you, while holding as much of a vice grip over the operation of the device as possible to make sure you need to buy another one as soon as possible through planned obsolescence. It's just that the stick-up-the-ass language announcements like these use is really infuriating when the people making them know full well how much they are screwing you over.

The first actually open platform phone is the one that will have longevity. The rest are snake oil about how good care they will take of you, because you can't take care of yourself with your own software that you can trust.

2
foobaw 2 days ago 4 replies      
As someone who worked in a large OEM company releasing tons of smartphones, I'm actually impressed it only took 100 people to get this out. I presume there were an incredible number of sleepless nights, as this is no easy task.

To be fair though, Sprint is one of the easier carriers to work with after T-Mobile. I can't imagine them releasing a phone on AT&T or Verizon, as their process is grueling. I guess since they're selling an unlocked version of their phone, it doesn't really matter to power users. However, most smartphone sales are from contracts sold directly by carriers, so it'll be interesting to see how they'll do in the market with their current strategy (similar to the OnePlus One).

Props to them though. It's not just about carrier certification. Releasing a smartphone is a long complex process. Some engineers at Sprint were briefly talking about how great the phone was, so I have high hopes.

3
ariofrio 2 days ago 7 replies      
Give me software updates for 7+ years, then we'll talk about buying your $700 phone. Lasting hardware means nothing without lasting software.

In the meantime, I'll keep buying $120 phones (Moto G4 with Amazon Ads FTW) and keep them for ~2 years until they break or software updates stop. Even though as a Catholic (Laudato Si, Rerum Novarum) it kills me to waste all those materials every couple of years and be part of the environmental degradation of our planet.

4
Hasz 2 days ago 9 replies      
You want fixable and well designed, long software updates, and a good price?

Buy an (old) iPhone.

I've got a 5S -- still perfectly fast for what I use it for (email, youtube, brokerage account, general internet, some small games), and it's getting OS updates and security patches until iOS 11. It's $120 on eBay; a new screen can be had for $13, a new battery for $11. It's solidly designed and there's a gigantic field of accessories and apps.

Maybe titanium and no bezels are worth a price premium, but there's no way it's worth a 5x increase in price.

5
git-pull 2 days ago 17 replies      
I admire the gumption of making a new phone.

But controlled obsolescence kills me. The real feature that has improved in phones over the past few years, for me, is the software and apps, not the hardware.

My wishlist:

- Give me a lighter, snappier OS. Not something clunkier and slower that uses more RAM and CPU/GPU (aka battery life).

- Actually support updates to the things for longer than 2-3 years.

- (Not related to this phone) Use stock android, unless you're removing bloat. Why? Because inevitably there's going to be apps. What I want is a nice flat surface that includes wifi, bluetooth, and nice APIs and permissions for those apps to plug into.

- The biggest feature you can give me on a phone? Battery life, replaceable battery, data/cell reception, speaker/microphone quality.

- SIM card that's easy to get out.

- Actually, dual SIMs.

- Support for carriers globally.

- And physical keyboards. Something for SSH'ing with.

3
Explaining React's license facebook.com
902 points by y4m4b4  1 day ago   537 comments top 4
1
kevinflo 1 day ago 17 replies      
I love love love love love react as a technology, but this is just awful. I believe any developer not on Facebook's payroll still contributing to React or React native at this point has a moral obligation to stop. I personally feel like such a fool for not taking all this seriously before the ASF gave me a wakeup call. React is a trojan horse into the open source community that Facebook purposely and maliciously steered over time to deepen their war chest. Maybe that's an overblown take, but they had a perfect opportunity here to prove me wrong and they didn't. The defensive cover they present here feels so paper thin.

Even if we paint all of their actions in the most favorable possible light, and even if the clause is a paper tiger as some have claimed, it doesn't matter. This is not how open source should work. We should not have to debate for years if a project's license is radioactive. Especially individual devs like myself who just want to use a great tool. We should be able to just use it, because it's open and that's what open means. This is so much worse than closed. It's closed masquerading as open.

2
DannyBee 23 hours ago 3 replies      
So, i feel for them, having watched Google's open source projects be targeted by patent trolls in the past. But i really don't think this is the way forward.

A few things:

1. If you want to suggest you are doing this as part of an attempt to avoid meritless litigation, you really should give concrete examples of that happening. Otherwise, it comes off as a smoke screen.

2. The assertion is that if widely adopted, it would avoid lots of meritless litigation. This is a theoretically possible outcome. Here's another theoretically possible outcome of wide adoption of this kind of very broad termination language: Facebook is able to use other people's technology at will because nobody can afford to not use their stuff, and no startup that they decide to take technology from, and say "no more facebook/react/etc for you", could realistically launch an effective lawsuit before they died.

Assume for a second you think Facebook is not likely to do this. If widely adopted, someone will do it.

Nobody should have to worry about this possibility when considering whether to adopt particular open source software.

(there are other theoretical outcomes, good and bad).

It's also worth pointing out: None of this is a new discussion or argument. All of the current revisions of the major licenses (Apache v2, GPLv3) went through arguments about whether to use these kinds of broader termination clauses (though not quite as one sided and company focused), and ultimately decided not to, for (IMHO good) reasons. I'm a bit surprised this isn't mentioned or discussed anywhere.

These kinds of clauses are not a uniform net positive, they are fairly bimodal.

3
jwingy 1 day ago 2 replies      
I wonder how Facebook would feel if all the open source software they currently use incorporated the same license. I bet it would deter them from enjoying much of the code they built their business on. This stance seems pretty antithetical to the goal and spirit of open source software and I really hope it's not the beginning of other companies following suit and 'poisoning' the well.
4
eridius 1 day ago 3 replies      
> We've been looking for ways around this and have reached out to ASF to see if we could try to work with them, but have come up empty.

There's a pretty obvious solution to this: relicense React. The fact that Facebook isn't even considering that is a pretty strong indication that they "weaponized" their license on purpose.

> To this point, though, we haven't done a good job of explaining the reasons behind our BSD + Patents license.

I think we already understand the reasoning behind it.

> As our business has become successful, we've become a larger target for meritless patent litigation.

And the solution you chose stops merit-ful litigation as well.

> We respect third party IP, including patents, and expect others to respect our IP too.

Clearly you don't, because you've intentionally designed a license to allow you carte blanche to violate other companies' patents if they're dependent enough upon React to not be able to easily stop using it.

4
E-commerce will evolve next month as Amazon loses the 1-Click patent thirtybees.com
591 points by themaveness  1 day ago   218 comments top 38
1
jaymzcampbell 1 day ago 4 replies      
Setting aside the madness that is the patent itself ever being granted, what I found most interesting on that post was that this could now (possibly) become an actual web standard in the future:

> the World Wide Web Consortium (W3C) has started writing a draft proposal for one click buying methods.

The W3C site itself has a number of web payment related proposals in progress[1]. The Payment Request API, in particular, looks pretty interesting (updated 2017-08-17). I wonder what a difference something like that would've made back in the day when I was bathed in Paypal SOAP.

[1] https://www.w3.org/TR/#tr_Web_Payments
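
As a rough illustration of the flow that the Payment Request spec describes - the method identifier and order details below are placeholders, and the exact shapes have shifted between drafts, so treat this as a sketch rather than the final API:

  // Minimal sketch of the Payment Request API flow in a browser.
  // 'basic-card' and the order details are illustrative values only.
  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }],  // payment methods the site accepts
    { total: { label: 'Order total', amount: { currency: 'USD', value: '19.99' } } }
  );

  request.show()                           // opens the browser's own payment sheet
    .then(response => {
      // send response.details to the server for processing, then close the sheet
      return response.complete('success');
    })
    .catch(err => console.error('Payment aborted or failed:', err));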

2
tyrw 1 day ago 7 replies      
I ran an ecommerce company for about a year, and one click checkout was the least of our concerns when it came to Amazon.

The speed of delivery, prime benefits, brand recognition, and willingness to lose money on many if not most items are absolutely brutal to compete against.

I'm glad one click checkout will be more broadly available, but it's probably not going to make much of a difference...

3
mseebach 1 day ago 7 replies      
The space (from my online shopping experience) seems to be divided between Amazon (with one click checkout, fast delivery etc) and everyone else (42 click checkout and one week delivery, if you're lucky).

If the one-click patent was a major inhibitor of competition, I'd basically expect to see a lot of two-click check out options. Instead I find myself creating a million redundant user accounts, telling people that my mother's maiden name is "khhsyebg" (she's got some Dothraki blood, it seems) and parsing "don't not uncheck the box if you wish to prevent us from causing the absence of non-delivery of our newsletter and also not abstaining from passing on your details to third parties".

4
NelsonMinar 1 day ago 1 reply      
The 1-Click patent was the genesis of a long debate between Jeff Bezos and Tim O'Reilly about software patents. It resulted in the formation of BountyQuest, a 2000-era effort to pay bounties for prior art for bad patents. Unfortunately it didn't really work out. But the history of arguing about software patents is pretty interesting. http://archive.oreilly.com/pub/a/oreilly//news/patent_archiv...
5
dboreham 1 day ago 7 replies      
I have been buying from Amazon for 20 years and have not once used 1-Click.
6
pishpash 1 day ago 1 reply      
This patent prevented a nefarious checkout pattern across myriad potentially unscrupulous store fronts for more than a decade so was it really so bad? ;)

Some days I feel Amazon was not only the world's largest non-profit organization but also among its most beneficent!

7
masthead 1 day ago 7 replies      
Still can't believe that this was a patent!
8
TheBiv 1 day ago 4 replies      
NOTE that the Registered Trademark of "1-Click" will still be valid and owned by Amazon

http://tmsearch.uspto.gov/bin/showfield?f=doc&state=4808:d4r...

9
romanhn 1 day ago 0 replies      
"E-commerce will change forever" ... strong words. Amazon has features that are a much bigger value proposition than one-click purchases. I don't see this changing the landscape in any significant way.
10
wheaties 1 day ago 2 replies      
"They have proposed ways of storing cards and address data in the browser..."

Oh hell no! Just what we need, yet another reason for people to attack your browser. Don't we already suggest to never use the "remember your password" button? Now, it's "remember your credit card." No. Please, just no.

11
dpflan 1 day ago 0 replies      
When the news about Soundcloud's future emerged, discussions turned to thoughts about how to help SC keep its roots and grow into what it can be rather than be a Spotify competitor. The Amazon One-Click patent was brought up in the context of how to allow buying the song / supporting the artist/record label you're enjoying.

Perhaps there is a chance now for SC (and others) to use this? (It'd be interesting to see how often the patent thwarted any business decisions. Also, I wonder if this was considered in the funding round...)

Here is the comment: https://news.ycombinator.com/item?id=14991938

Here is the parent HN post: https://news.ycombinator.com/item?id=14990911

12
philfrasty 1 day ago 0 replies      
...e-commerce will change forever...

Simply from a legal standpoint this is BS. In some countries you have to show the customer a whole bunch of information and terms before they can make the purchase.

Just because Amazon ignores this due to their size and $$$ doesn't mean everyone can.

13
10000100001010 1 day ago 0 replies      
I have never used one-click but I have relatives that compulsively purchase off Amazon with one-click all of the time. It is almost a drug to them because they click a button and then stuff shows up at their door. For some users, removing all barriers except for a click is sufficient to get them to buy.
14
novaleaf 1 day ago 1 reply      
anecdote: I use Amazon for practically all of my shopping, only supplementing it by going to a brick-and-mortar for food.

I have never used the "buy now" feature, so honestly I think it's impact is a bit overblown.

Here are my reasons I never use it:

1) I do a lot of comparison shopping, so I like to review my orders before the final purchase. (in case I put something in my cart and then later added something better)

2) I want to make sure I don't order something under $35 and get stuck paying for expedited shipping (which is free for prime members over $35 in purchases)

3) I have a few addresses and cards on file, and want to make sure the order will use the right one.

4) I use the cart as a temporary list, anything that looks interesting during my shopping session gets thrown in there (or perhaps another browser window if doing comparisons).

15
jwildeboer 1 day ago 0 replies      
As a former core developer of OSCommerce, where our users were threatened with patent infringement over exactly this, I will order a nice glass of whiskey to celebrate that this thing is finally over. This one patent made me join the fight against software patents in Europe, which we sort of won in 2005.
16
drcube 1 day ago 2 replies      
This is a "feature" I actively avoid. Why in the world would anyone want to buy something online without a chance to review their purchase? Other web pages don't even let you leave the page without asking "are you sure?".
17
stretchwithme 1 day ago 0 replies      
Buying things with 1 click is not an Amazon feature I've ever cared to use.

The right product at the right price, fast. That's what matters.

18
ComodoHacker 16 hours ago 0 replies      
I'm sure Amazon has already filed an application for "Zero-click checkout". Something like "swipe over a product image in a 'V' pattern to checkout", etc.
19
clan 1 day ago 7 replies      
I have always hated the thought that retailers stored my credit card information. Seems to be very common with US based shops.

If this gets any traction I will need to fight even harder to opt out.

I yearn for the day I can have one off transaction codes.

20
blairanderson 12 hours ago 0 replies      
Businesses use that shit. They don't have time and often don't care about the little details.

Businesses are the customers you want.

21
amelius 1 day ago 0 replies      
Reminds me of the joke I read somewhere about a "half-click patent", where the purchase is done on mousedown instead of on click.
22
summer_steven 1 day ago 0 replies      
This is almost like a patent on cars that go above 60 MPH. Or a website that takes less than 50 ms to load.

They have a patent on the RESULT of technology. The patent SHOULD be on THEIR VERY SPECIFIC IMPLEMENTATION of 1-click checkout, but instead it is on all implementations that result in 1-click checkout.

Patents really are not meant for the internet...

23
benmowa 1 day ago 0 replies      
"These are the ones [credit card processors] we have worked with in the past that we know use a card vault. Others likely support it too"

Note: the more common term is credit card tokenization, not just vaulting, and it is not required for 1-click if the merchant is retaining CC numbers - although that is not recommended due to PCI and breach liability.

24
wodenokoto 20 hours ago 0 replies      
Does Amazon even use this themselves? I have fewer clicks going from product page to purchase confirmation on Aliexpress.com than on Amazon.com
25
vnchr 1 day ago 0 replies      
Would anyone like something built to take advantage of this? I'm open next week between contracts (full-stack JS), maybe there is a browser extension or CMS plugin that would make this feature easy to implement?
26
samsonradu 1 day ago 0 replies      
Interesting to find out such a patent even exists. Does this mean the sites on which I have seen the one-click feature implemented were until now breaking the patent?
27
dajohnson89 1 day ago 0 replies      
The # of returns is surely higher for 1-click purchases -- wrong address, wrong CC#, no chance to double-check you have the right size/color, etc.
28
nocoder 1 day ago 2 replies      
Does this mean the use of the term "1-click" will no longer be exclusive to Amazon or is that a part of some trademark type stuff?
29
tomc1985 1 day ago 0 replies      
Oh joy, now everyone's going to have that stupid impulse buy button. Yay consumerism, please, take my firstborn...
30
sadlyNess 1 day ago 0 replies      
Hope it's going to be added to the payments ISO standards. Is that a fitting home, along with the W3C move?
31
ThomPete 1 day ago 1 reply      
So quick product idea.

Make a Magento integration that allows ecommerce sites to implement it?

32
radicaldreamer 1 day ago 0 replies      
Anyone know if a company other than Apple currently licenses 1-Click?
33
likelynew 1 day ago 0 replies      
Has there been any court case for the validity of this patent?
34
yuhong 1 day ago 0 replies      
I remember the history on Slashdot about it.
35
minton 1 day ago 0 replies      
Please stop calling this technology.
36
perseusprime11 1 day ago 0 replies      
Amazon is eating the world. The loss of this patent will have zero sum impact.
37
kiflay 1 day ago 0 replies      
Interesting
38
pdog 1 day ago 1 reply      
> No one knows what Apple paid to license the technology [from Amazon]...

This is factually incorrect. Of course, there are executives at Amazon and Apple who know how much was paid to license the one-click patent.

5
Firefox Focus A new private browser for iOS and Android blog.mozilla.org
615 points by happy-go-lucky  1 day ago   298 comments top 32
1
progval 1 day ago 9 replies      
According to F-Droid [1], it contains `com.google.android.gms:play-services-analytics`.

[1]: https://gitlab.com/fdroid/rfp/issues/171#note_30410376

2
hprotagonist 1 day ago 3 replies      
I installed Firefox Focus for iOS simply for its content blocker. I still prefer using mobile safari, but augmented with three content blockers:

- Firefox Focus, which blocks all sorts of stuff

- 1Blocker, which blocks all sorts of stuff

- Unobstruct, which blocks Medium's "dickbar" popups.

3
lol768 1 day ago 2 replies      
Have been using this a while, it's really nice as the default browser to open links in. Having the floating button to clear everything is neat and I like the UI design. It's also really fast.

I'd like to see better support for getting SSL/TLS info - why can't I tap on the padlock and get the certificate info (EV, OV, DV?), cipher suite, HSTS etc?

4
rcthompson 1 day ago 1 reply      
This is useful to use as your default browser. It has a quick way to open the same link in another browser, so you can use it as a sort of quarantine to vet unknown links before exposing your main browser and all its juicy user data to a new website.
5
ghh 16 hours ago 2 replies      
Focus does not seem to erase your history in the way you may expect. Try this on Android:

- Erase your history.

- Go to HN, click any link you haven't clicked before.

- Wait for it to load.

- Erase your history. Make sure you see the notification "Your browsing history has been erased".

- Go to HN again, and see the link you've just clicked still highlighted as 'visited'.

6
Xoros 1 day ago 2 replies      
How is this news? I installed it weeks ago on my iPhone. I don't understand why Mozilla just announced it now. Maybe it's a new version.

On the browser itself: I launched it, navigated to a URI, closed it, relaunched it, typed the first characters of my previous URI and it auto-completed it. From my history, I guess.

So it's not like incognito mode on other browsers. (Haven't retested again)

7
bdz 1 day ago 8 replies      
I wish open source projects would publish the compiled .apk file, not just the source code.

If I want to install this on my Fire HD I either have to download the .apk from some dodgy mirror site or install Google Play with some workaround on the Fire HD. Cause Firefox Focus is not available in the Amazon App Store. I mean yeah I can do both in the end, not a big deal, but I just want the .apk nothing else.

8
computator 1 day ago 3 replies      
This would have been perfect for iPad 2's and 3's on which Safari and the normal Firefox keep crashing under the weight of the current bloated web.

But alas, the "simple and lightweight" Firefox Focus actually requires a heavyweight 64-bit processor:

> Why aren't older Apple products supported? Safari Content Blockers (which include Firefox Focus) are only available on devices with an A7 processor (64-bit) or later. Only 64-bit processors can handle the extra load of content blocking, which insures optimal performance. For example, since the iPad 3 has an A5 processor, Firefox Focus is incompatible.[1]

Come on, iPad 2's and 3's are less than 5 years old. There has to be some way to keep the iPad 2 or 3 alive if all you want to do is browse the web.

[1] https://support.mozilla.org/en-US/kb/focus

9
cpeterso 1 day ago 1 reply      
Since I started using Firefox Focus for one-off searches, I'm surprised at how infrequently I really need to be logged into any websites to complete my task. Nice that Focus simply clears all those trackers and search history when I close it.
10
nkkollaw 1 day ago 5 replies      
So, if I understand this correctly... It's a regular browser, but like you're always in private mode + it's got a built-in ad blocker?

If I want to check Hacker News let's say 5 times throughout the day and feel like leaving a comment, I have to login again, without autocomplete..?

Maybe I'm missing something.

11
fiatjaf 1 day ago 1 reply      
> For example, if you need to jump on the internet to look up Muddy Waters' real name

Best idea ever. That's the most common use case people have and one that's drastically underserved by current browsers.

If people can't get their browser to quickly open a link to simple stuff, it means the web is failing. If the web is failing they'll quickly jump over to sending images over WhatsApp or fall into the trap of using the Facebook app for all their needs that could be otherwise served by the web.

12
webdevatwork 1 day ago 1 reply      
Firefox Focus is great. It's amazing how much better web readability and performance gets when you block most of the adtech garbage.
13
ukyrgf 1 day ago 0 replies      
I love Focus. I wrote about it here[1], albeit poorly, but it just made me so happy to be able to use my phone again for web browsing. Sometimes I open Chrome and the tab that loads was something I was testing weeks prior... it's taken that big of a backseat to Firefox Focus.

[1]: https://epatr.com/blog/2017/firefox-focus/

14
x775 1 day ago 0 replies      
I have been using this for a while on one of my phones (OnePlus 5, newest version of OxygenOS) and am fairly satisfied with its overall performance. It works seamlessly for casual browsing - i.e. opening pages from Reddit or similar. However, I cannot help but feel as if the standard version with appropriate extensions (i.e. Disconnect, uBlock Origin and so forth) remains a better alternative than Focus in solving the very issues Focus seeks to address. I do very much love how closing the browser erases everything though. It is worth mentioning that the ability to install extensions is exclusive to Android for now, so Firefox Focus has become my go-to browser for my iOS devices. If you have Android the above is worth considering though!
15
gnicholas 1 day ago 2 replies      
I love Focus and now use it for almost all of my mobile googling. One thing that would be nice is a share extension, so that when I'm in Safari and see a link I want to open I can share it to Firefox Focus. Right now I have to "share" it to [copy], open Focus, and paste it in. Not a huge hassle, but would be nice to streamline.
16
st0le 1 day ago 1 reply      
Hasn't it been available for a while now?
17
byproxy 1 day ago 1 reply      
There is also the Brave browser, which I believe covers the same ground: https://play.google.com/store/apps/details?id=com.brave.brow...
18
makenova 11 hours ago 0 replies      
My favorite feature is that it blocks ads in Safari. I'm surprised people and Mozilla aren't mentioning it more.
19
wnevets 1 day ago 1 reply      
I've been using it as my default browser for Android for a while and I like it. The only thing I don't love is the notification saying the browser is open; it triggers my "OCD". I understand why it's there, but I wish there were some way around it.
20
api_or_ipa 1 day ago 3 replies      
Why can Firefox build a browser in 16 MB and yet every other app on my phone is 80+ MB?
21
noncoml 1 day ago 2 replies      
Looks awesome and fast. Exactly what's needed and expected from Mozilla. Thank you!

Can we have something similar for desktop as well?

22
AdmiralAsshat 1 day ago 0 replies      
Hmm. Just visited a few of the pages I normally visit on my phone in Firefox for Android, and immediately got several pop-ups and banners that don't normally get through.

So I'd say its adblocking is still less effective than regular Firefox for Android + uBlock Origin add-on.

It does feel quite speedy, though. Could possibly be what I start using in the future to read HN articles.

23
bllguo 1 day ago 0 replies      
I've been loving focus. Fastest mobile browser I've used. Appreciate the privacy features also.

I set it to my default browser and keep chrome handy on the side.

24
hammock 1 day ago 1 reply      
The headline in this submission fails to deliver the primary message of the actual post, which is that Firefox Focus is a lightweight mobile browser. That it blocks third-party tracking by default is secondary.
25
aorth 18 hours ago 0 replies      
I've been using this on and off for a few months since it came out. It's very smooth and enjoyable to use for looking things up quickly. But Samsung Internet Browser's[0] content blockers (AdBlock Fast, Disconnect) are also smooth and do a better job of blocking ads than Focus. Neither are as good as uBlock Origin, of course, but then you must use the "real" Firefox on Android, which is not very smooth and feels very foreign on Android.

[0] https://play.google.com/store/apps/details?id=com.sec.androi...

26
manaskarekar 1 day ago 1 reply      
There's also the DuckDuckGo app, which seems similar to this, although I'm not sure how they differ.

https://duckduckgo.com/app

27
k2enemy 1 day ago 0 replies      
Does Focus have a "HTTPS everywhere" feature? I didn't see mention of it on the site, so I'm guessing not. That is one thing that I'm sorely missing on iOS.

Edit: It seems not: https://github.com/mozilla-mobile/focus-ios/issues/155

28
sweep3r 1 day ago 2 replies      
New? How come I've been using it for years?
29
moosingin3space 1 day ago 1 reply      
I like this app a lot -- very fast and convenient.

Could this someday integrate Tor, making it sort of an amnesiac Tor Browser for mobile?

30
ohsnapman 1 day ago 0 replies      
I've been using this on Android for weeks. It's super fast, blocks a lot of annoying ads (think jumpy mobile overlays). No bookmarks or tabs, so if you're looking at a recipe for a dish you're making, there's always a chance it gets wiped. Just use Chrome for that. Highly recommend.
31
JepZ 1 day ago 0 replies      
I wouldn't consider tabs 'every beta bell and whistle' :-/

If you can live without tabs, try it, it's great.

32
abawany 1 day ago 2 replies      
I have really come to appreciate this browser even though I have been using it for just about a month. It is fast and clean. Clicking links from email and knowing they will open in effectively a private browser instance is a great feeling. I missed the multi-tab feature for a little bit but have now adjusted.
6
Vue.js vs. React vuejs.org
592 points by fanf2  17 hours ago   368 comments top 17
1
pier25 12 hours ago 13 replies      
We moved away from React to Vue about 8 months ago and everyone on the team is a lot happier.

First reason is we hate JSX. It forces you to write loops, conditionals, etc, outside of the markup you are currently writing/reading. It's like writing shitty PHP code without templates. It also forces you to use a lot of boilerplate like bind(), Object.keys(), etc.

Another problem with React is that it only really solves one problem. There is no official React router and we hated using the unofficial react-router for a number of reasons. A lot of people end up using MobX too.

With Vue there is no need to resort to third parties for your essential blocks. It provides an official router and store called Vuex, which IMO blows Redux out of the water when combined with Vue's reactive data.

Vue docs are probably one of the best I've used. They provide technical docs, plus excellent narrative docs (guides) for all their projects (Vue, Router, Vuex, templates, etc).

I won't say that Vue is perfect, but we would never go back to React.

If you don't like Vue but want to get out of React, check out Marko, the UI library by Ebay. It's better in every way than Vue or React except that the community and ecosystem are almost non-existent.

http://markojs.com/

2
a13n 13 hours ago 15 replies      
You'll see quotes in this thread like "The demand for both React and Vue.js is growing tremendously" thrown around. It's good to check out npm install stats to get an unopinionated comparison.

https://npm-stat.com/charts.html?package=react&package=vue&p...

In reality, React is downloaded roughly 4-5x more than angular and 7-8x more than Vue. In August so far, React has 75% market share among these three libs. Interestingly, this share has grown in August compared to both last month (July) and beginning of year (January).

While this thread and the license thread might indicate that React is dying, it's not. It's growing.

If Vue is going to be what React is today, it has quite a long way to go.

3
Kiro 12 hours ago 7 replies      
I've built semi-large applications in both Vue.js and React. I like both but prefer React.

For me Vue.js is like a light-weight Angular 1, in a good way. It's very intuitive and you can start working immediately. It does, however, easily end up in confusion about where the state lives with the two-way binding. I've run into a lot of implicit state changes wreaking havoc. The declarative nature of React definitely wins here, especially working with stateless functional components. If you're serious about Vue you should adhere to unidirectional bindings, components and use Vuex.

The best thing about Vue.js for me is the single file components. It's such a nice feeling to know that everything affecting a certain component is right before your eyes. That's also the reason I started adapting CSS-in-JS in my React components.

The biggest problem for me with Vue.js is the template DSL. You often think "how do I do this complicated tree render in Vue's template syntax? In JSX I would just use JavaScript". For me, that was the best upgrade going from Angular to React and it feels like a step backwards when using Vue.js.

4
blumomo 2 hours ago 1 reply      
In this thread people are fighting about their _opinions_ why they use Vue.js or React. And why X is really better than Y.

In reality these programmers don't want to have the feeling they might have made the wrong choice when they used X instead of Y. The idea that they might have taken the poorer choice hurts so much that they need to defend their decision so heavily, while in reality taking ReactJS or Vue.js is like ordering pizza or pasta. You usually don't want to have both at the same time. So you need to explain why pizza is better than pasta tonight. Only that you usually have to stick around longer with Vue.js or ReactJS once chosen. Enjoy your choice and solve real problems, but stop fighting about it, programmers. Pasta and pizza will always both win.

5
spion 13 hours ago 2 replies      
To me the whole idea of client-side HTML templates seems bad. They start out easy enough, but then they either limit you in power or introduce new and weird concepts to replace things that are easy, familiar and often better designed in the host language.

Here is an example on which I'd love to be proven wrong:

https://jsfiddle.net/j2sxgat2/2/

It's a generic spinner component that waits on a promise, then passes off the fetched data to any other custom jsx. It can also take onFulfill and onReject handlers to run code when the promise resolves.

The concrete example shown in the fiddle renders a select list with the options received after waiting for a response from the "server". An onFulfill handler pre-selects the first option once data arrives. The observable selected item is also used from outside the spinner component.

With React+mobx and JSX it's all simple functions/closures (some of them returning jsx), lexical scope and components. With Vue I'm not sure where to start - I assume I would need to register a custom component for the inner content and use slots?
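
A rough, stripped-down sketch of the React+mobx side of that idea - not the fiddle's exact code, and it assumes mobx and mobx-react are available:

  import React from 'react';
  import { observable } from 'mobx';
  import { observer } from 'mobx-react';

  // Track a promise's progress in an observable object so observer components re-render.
  function trackPromise(promise, { onFulfill, onReject } = {}) {
    const state = observable({ status: 'pending', value: null });
    promise.then(
      v => { state.value = v; state.status = 'fulfilled'; if (onFulfill) onFulfill(v); },
      e => { state.value = e; state.status = 'rejected'; if (onReject) onReject(e); }
    );
    return state;
  }

  // Generic spinner: show a placeholder until the promise settles, then hand the
  // data to whatever jsx-returning function the caller passed as children.
  const Spinner = observer(({ state, children }) => {
    if (state.status === 'pending') return <div>Loading...</div>;
    if (state.status === 'rejected') return <div>Failed: {String(state.value)}</div>;
    return children(state.value);
  });

  // Usage with a fake "server" call (hypothetical) and an observable selection:
  const fetchOptions = () => new Promise(res => setTimeout(() => res(['a', 'b', 'c']), 500));
  const selection = observable({ value: null });
  const options = trackPromise(fetchOptions(), { onFulfill: list => { selection.value = list[0]; } });
  // <Spinner state={options}>
  //   {list => <select value={selection.value} onChange={e => { selection.value = e.target.value; }}>
  //     {list.map(o => <option key={o}>{o}</option>)}
  //   </select>}
  // </Spinner>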

6
conradfr 6 hours ago 1 reply      
In a way VueJS is "React for those who liked Angular1".

I've done many Angular apps. I've done a bit of React (with Reflux & Browserify).

I tried moving to React/Redux/Webpack but it's not an easy task to grasp the whole thing. Webpack itself nearly made me throw in the towel on side projects.

I tried VueJS because of a job interview and quite liked it and got productive really fast thanks to good documentation and my previous experience in angular & React.

Professionally I wouldn't mind any of those but for side projects it will be VueJS from now on.

As a side note, I don't get why all the boilerplates always mix backend and frontend code and dependencies. If you're not interested in a node backend and you're still learning, it's overwhelming.

The worst thing is that the boilerplates you find are always outdated (router, hot-reloading, etc.) and, worst of all, mingle server and client deps, so if you're not interested in a node backend you have to

7
keyle 14 hours ago 5 replies      
I've used both. What makes me pick Vue in the end is the fact that there is no compiler needed, no jsx and all the nonsense that goes with that.

If you want a full blown huge application to last years, then go Angular... Although who knows if Angular will be there in 5 or so years.

There is no perfect library/framework but I love Vue because Vue does exactly what it says on the tin.

8
kennysmoothx 14 hours ago 2 replies      
I used React for a few years and it was great and powerful; there were many things, however, that I disliked. Particularly, I was not a fan of JSX. I liked React but I did not feel comfortable using it.

When I first saw VueJS I had a hard time understanding how it would be any better than React, that is until I saw single file components.

https://vuejs.org/images/vue-component.png

I fell in love with the eloquence of being able to separate my HTML, JS, Styles for a single component.. it seemed /right/ to me..

In any case, I've been using VueJS ever since for my new projects moving forward and I'm very happy with it. It has everything I would ever need from React but in what I feel is a more polished and thought-out way.

Just my two cents :)

9
bluepnume 1 hour ago 0 replies      
For me, the best thing about jsx is the fact that it's so easy to create different implementations of `React.createElement` to solve different problems outside of the scope of React itself. Want a render function that uses jsx but returns a string, a DOM tree, or something entirely different? Totally possible! I didn't like jsx at first, but the fact that it's so decoupled from React itself counts extremely in its favor, in my view.

For example, for one of my libraries I wanted a really simple way to allow people to include custom templates. I started with string-based templates, but quickly realized that this was inflexible for any kind of event binding or later dom mutations. Achieving that was super easy with jsx -- I just implemented a custom jsx function which directly outputs DOM elements.

Step 1: implement a jsxDom function [^1] which takes (name, props, children) (e.g. https://github.com/krakenjs/xcomponent/blob/master/.babelrc#...)

Step 2: point babel to my jsxDom function (e.g. https://github.com/krakenjs/xcomponent/blob/master/src/lib/d...)

This even allowed some cool hacks like including iframes directly in the jsx, to sandbox sections of the page (since the library is geared towards creating components to be embedded on 3rd party sites):

  containerTemplate({ jsxDom }) {
      return (
          <iframe>
              <style>
                  p {
                      margin-top: 40px;
                  }
              </style>
              <p>I can be sure the styles in this iframe won't affect the parent page</p>
          </iframe>
      )
  }
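
For illustration, a minimal sketch of what a jsxDom pragma along those lines could look like - my own stripped-down version, not xcomponent's actual implementation:

  // Hypothetical, minimal jsx pragma that builds real DOM nodes instead of React
  // elements. Babel is configured (as in step 2 above) to call this for every JSX tag.
  function jsxDom(name, props, ...children) {
    const el = document.createElement(name);
    for (const key of Object.keys(props || {})) {
      const value = props[key];
      if (key.indexOf('on') === 0 && typeof value === 'function') {
        el.addEventListener(key.slice(2).toLowerCase(), value); // e.g. onClick -> click
      } else {
        el.setAttribute(key, value);
      }
    }
    for (const child of [].concat(...children)) {
      el.appendChild(child instanceof Node ? child : document.createTextNode(String(child)));
    }
    return el;
  }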

10
ergo14 13 hours ago 3 replies      
Another option that is interesting right now is Polymer 2.x if you haven't tried it recently, give it a shot.

https://vuejs.org/v2/guide/comparison.html#Polymer

There are some similarities shared between polymer and react/vue (personally only used react and angular 1.x before).

I've built applications with it using polyfills and things worked just fine with legacy applications on IE10 + jquery interacting with web components.

Performance is nice, and there is more and more adoption from giant enterprises like Netflix, IBM, GE, Gannett, Electronic Arts, Coca-Cola, ING, BBVA.

Webcomponents.org has over 1k components to choose from and is growing.

Now with `lit-html` arriving soon we might see an alternative to JSX if someone wants that, and polymer-redux or polymer-uniflow is available as an option too.

https://hnpwa.com/ - one of the fastest Hacker News implementations is based on Polymer - and that is even without SSR.

SvelteJS also seems nice, although it seems to be a one-man project for now :( On the Polymer end, I hope that at the summit next week they will finally announce proper NPM support - I miss that.

11
jvvw 13 hours ago 0 replies      
For anybody looking into vue.js for the first time, I highly recommend starting with Laracasts series of screencasts which I found much more helpful than the information on the vue.js site itself when I was getting to grips to with it:

https://laracasts.com/series/learn-vue-2-step-by-step

12
jameslk 13 hours ago 4 replies      
These are the things I find to be killer features of React and its ecosystem:

- React Native (I know there's Weex but it's not production ready, nor as feature rich)

- Streaming server side rendering

- React Relay (GraphQL integration)

- JSX by default. VueJS pushes an Angular-esque template language where I have to learn new syntax, binding concepts, directives and conditional statements.

- Corporate backing

I've used React in very large projects, where these features have been fairly critical. React's licensing is odd but not odd enough for me to ditch it. I'd really hate to see the community churn once again on frontend libraries, but that's JavaScript for you I guess.

13
steinuil 13 hours ago 3 replies      
What's with the hate for JSX? I think handling HTML as data makes much more sense and is much more convenient than dealing with dumb templates or weird DSLs like Vue's or Angular's.
14
zpr 12 hours ago 3 replies      
One of the biggest things React has going for it, other than being maintained by Facebook and having a much larger community, is React Native. You can learn one web framework and now also make compelling, real native apps. To me those are both deal makers.

Surprised I haven't seen this mentioned more in the thread.

15
IgorPartola 9 hours ago 0 replies      
I happily used Angular 1 for a few things. When it became clear that Angular 2 was the party line, I looked at it and unfortunately found an overengineered framework chasing what React was.

I looked at React, but without a cohesive framework, that ecosystem is just a bit too much of a mess. A gazillion starter app templates that can't quite agree and always seem a little out of date with the fast moving components. JSX is just ugly. At least to me. I regularly use Django templates, so the clean leap to Handlebars-style templates feels very natural to me. Redux is straight up unapproachable to someone new to the concepts, but at the same time it feels like it simply reinvents events and event handlers.

Vue was a revelation. It is simple, cohesive and I feel productive.

16
pasta 14 hours ago 2 replies      
My #1: download vue(.min).js and you are ready to go. No package manager, bundler or whatnot needed.
17
colordrops 12 hours ago 1 reply      
I know this is about Vue vs React, but a lot of the things people are saying they like about Vue can be found in Google's Polymer. It's very simple to comprehend and you can make single file components. And it's based on W3C standard web components so you won't be building yourself into future obsolescence. With Polymer 2 components are now built using classes, so your code is clear and 100% native JS. Lastly, the goal of the Polymer team is to get rid of Polymer by advancing web standards, and with 2.0 you can see they are definitely following this approach as Polymer doesn't add much to web components other than ease of use functionality such as template binding and cross browser shims.
7
Peanut allergy cured in majority of children in immunotherapy trial theguardian.com
476 points by DanBC  2 days ago   169 comments top 26
1
hanklazard 2 days ago 2 replies      
Physician-scientist here. My graduate work was in an immunology lab. Just wanted to clear up some confusion I've seen in multiple posts.

While both peanut allergy and celiac disease involve pathogenic immune responses, they represent very different types of problems and this study's results do not suggest any relevance to celiac.

The peanut allergies that they are referring to in this study are one of the most striking examples of what's known as a Type I hypersensitivity (IgE-mediated/anaphylaxis). In this type of reaction, high levels of IgE, a class of antibody, generated toward a specific antigen become loaded onto mast cells and, on re-exposure, cause mast cell degranulation and subsequent smooth muscle contraction. For this reason, anaphylactic responses frequently involve closing of the airway, nausea/vomiting, and other dysregulations of smooth muscle activation and require a strong adrenergic agonist like epinephrine to counteract this activation.

Celiac pathogenesis is not a Type I hypersensitivity. To my knowledge, the exact mechanism of pathogenesis is not known, but it is likely a combination of Type III (antibody-mediated) and Type IV (T-cell mediated) hypersensitivities.

Anyway, I'm not trying to ruin anyone's hope here, but this study has no relevance for celiac. What this has shown is that there is the potential for food allergies to be systematically eliminated with long-term increasing exposure to the problematic antigen, in this case, peanut antigen. This has been done for some time with other, less aggressive types of IgE-mediated conditions like dog and cat dander allergies. So in that way, it's not all that surprising of a result, but I'm certainly glad to see that this was able to be done safely. This is really great news for the millions of people out there with anaphylactic food allergies.

All that being said, I do hope that celiac can be managed more effectively with immune-modulatory (or other) treatments in the future and my sympathies go out to those who have been affected by this horrible disease.

2
S_A_P 2 days ago 7 replies      
My daughter has Celiac disease. It was diagnosed at age 4 when her growth chart showed she did not gain a single pound and grew " from age 3-4. We did a biopsy of her small intestine and it was completely smooth. (Should be almost like velvet) Her blood levels also showed high sensitivity to gluten. We have her on a strict gluten free diet and she has since followed the growth chart perfectly. However she is sensitive enough that she cannot eat gluten free food that has been prepared on the same grill/pan/cook or prep surface as food containing gluten. She suffers from nausea and diarrhea when cross contamination occurs. What this means is that I have to cook every meal she eats and bring it with us if we go to restaurants. We live in probably the best time ever for gluten free foods, but this is still a significant hardship for her. She is 7 now and I worry about what will happen as she gets older and wants to hang with friends/date/college. Unless things change she cannot just go grab food at a restaurant. Some restaurants have a gluten free protocol (PF Chang's comes to mind) but this is not common. From what I've read gut bacteria could be a contributor to gluten intolerance. I really hope studies like the peanut allergy one encourage other dietary studies and immunotherapy becomes more common. Her having celiac disease is not the end of the world but her quality of life would change drastically if she didn't have to worry about that.
3
gehwartzen 2 days ago 3 replies      
The AAP also recently changed its guidelines for introducing peanuts to babies based on a study [1] showing a pretty dramatic decline in the development of the allergy with early exposure vs total avoidance.

[1]http://www.nejm.org/doi/full/10.1056/NEJMoa1414850#abstract

4
rhexs 2 days ago 4 replies      
Does anyone know the history of why allergists assumed this just wouldn't work for decades? I'm assuming they initially tried this at the dawn of the allergist specialization but gave up due to bad practices / deaths?

I only ask because it seemed to have been general knowledge that this was impossible / couldn't be done up until recently. As an outsider looking in, it seems quite obvious, but that's just due to naivete.

5
jwineinger 2 days ago 3 replies      
I'm a parent of a 4-year old with a peanut allergy. We've been told that anywhere from 18-25% of kids with it "outgrow" the allergy by age 5. I've been looking into private practice oral immunotherapy (OIT) recently, which this protocol seems to be a variant of (adding the bacteria). My understanding is that you start with a low dose and then gradually increase over months until you're eating whole peanuts (4-12 of them) in the morning and evening as a maintenance dose. From what I've found, this can work for many types of food allergies and for all ages and all sensitivities.
6
herewegohawks 2 days ago 1 reply      
Very severe peanut allergy here - honestly go away with this crap of comparing your gluten allergy. I have to carry an epipen and worry about risking my life when I so much as eat food that was on the same table as baked goods that MIGHT have traces of peanut butter.
7
sageikosa 2 days ago 4 replies      
When in her teens, my daughter developed a peanut allergy during her time in drum corps; it was confirmed with skin patch tests and she had to carry an epi-pen. After about a year it just went away and she's back to "normal".
8
0xbear 2 days ago 1 reply      
True story: in Russia (and I can only assume other Eastern European countries) peanut allergy is so rare that I had never even heard of it until I emigrated. Pollen allergy is about the same, and ragweed pollen allergy can be really bad too. But not peanut allergy.
9
6d6b73 2 days ago 0 replies      
I wonder if they had a control group taking only the bacteria, and another one taking only peanut proteins. If not, why did they decide on this combination?
10
nsxwolf 2 days ago 0 replies      
This seems so obvious, and I've been hearing about this approach for years and years. Yet it still feels like 20 years from now this will still not be a treatment, kids' classrooms will still be "nut free", and more and more kids will be carrying around epi-pens, which will still cost a fortune.
12
manmal 2 days ago 3 replies      
Can we derive that Lactobacillus rhamnosus could reduce all kinds of allergies when taken, even without adding proteins that you are allergic to?
13
tmaly 2 days ago 0 replies      
My daughter is allergic to eggs, salmon, and fish in that same family. Having vegan options in this modern day has been a real help.

I started my food side project https://bestfoodnearme.com with the idea in mind that I can catalog dishes at restaurants based on allergies, gluten free etc. Allergic reactions are a very scary thing especially with small children.

14
matt_wulfeck 2 days ago 1 reply      
What's amazing to me is that they used to recommend you don't give children any peanuts until a specific age, but then they learned that early exposure actually dramatically decreases the chance of developing an allergy.

I feel like I have to throw away almost all advice they give us about kids these days. These types of things do a lot to undermine the advice of doctors.

15
vanattab 2 days ago 4 replies      
Yumm... I can't wait for the shellfish version! I would love to try shrimp again and find out what all the fuss is about with lobster.
16
justinc-md 2 days ago 0 replies      
If you're in the bay area and considering OIT, a friend of mine is opening a private practice offering only OIT [0], starting next Wednesday in Redwood City. She is currently a full-time clinician at the Sean N. Parker Center for Allergy and Asthma Research at Stanford University.

Her clinic is relatively unique, in that it will be offering multi-allergen rapid desensitization. Using this procedure, a person can be desensitized to multiple allergens simultaneously, in as little as three months. She can treat milk, egg, wheat, soy, peanut, tree nut, fish, and shellfish allergies.

[0]: http://wmboit.com

17
waterhouse 2 days ago 4 replies      
Could this be made to work on allergies in general? The article suggests it could at least be used for food allergies in general.
18
melling 2 days ago 0 replies      
Will this work in adults too?
19
LordKano 1 day ago 0 replies      
I have a young cousin who had a pretty severe nut allergy. After receiving chemo for cancer treatment, she was cured of both the cancer and the nut allergy.
20
zeapo 2 days ago 0 replies      
A previous article (2015) talked about the same study http://www.abc.net.au/news/2015-01-28/probiotics-offer-hope-...
21
cst 2 days ago 2 replies      
48 children were enrolled in the trial. Half of them were given the treatment and half the placebo, leaving 24 children in each group. Statistical significance testing is reported in the article and seems fairly robust, but this is too small a sample size to be fully confident in the results.
22
alfon 2 days ago 0 replies      
23
matt_heimer 2 days ago 0 replies      
Someone watched the Princess Bride - I spent the last few years building up an immunity to peanut powder.
24
jordache 2 days ago 0 replies      
Is nut allergy a rising issue for other parts of the world?
25
Tade0 2 days ago 2 replies      
Interesting how this bacteria is a common ingredient in yogurt.
26
grb423 2 days ago 1 reply      
When I was a kid I never heard of peanut allergies. What happened? Did children's guts change? Did peanuts?
8
Why PS4 downloads are so slow snellman.net
497 points by kryptiskt  11 hours ago   108 comments top 20
1
ploxiln 9 hours ago 4 replies      
Reminds me of how Windows Vista's "Multimedia Class Scheduler Service" would put a low cap on network throughput if any sound was playing:

http://www.zdnet.com/article/follow-up-playing-music-severel...

Mark Russinovich justified it by explaining that the network interrupt routine was just too expensive to be able to guarantee no glitches in media playback, so it was limited to 10 packets per millisecond when any media was playing:

https://blogs.technet.microsoft.com/markrussinovich/2007/08/...

but obviously this is a pretty crappy one-size-fits-all prioritization scheme for something marketed as a most-sophisticated best-ever OS at the time:

https://blogs.technet.microsoft.com/markrussinovich/2008/02/...

Many people had perfectly consistent mp3 playback when copying files over the network 10 times as fast in other OSes (including Win XP!)
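
Back-of-the-envelope math on that cap, assuming full-size ~1500-byte Ethernet frames (purely illustrative numbers):

  const packetsPerSecond = 10 * 1000;              // 10 packets per millisecond
  const bytesPerPacket = 1500;                     // typical Ethernet MTU
  const capMbit = packetsPerSecond * bytesPerPacket * 8 / 1e6;
  console.log(capMbit);                            // ~120 Mbit/s
  // That leaves a 100 Mbit link untouched, but caps a gigabit link at roughly
  // 12% of its capacity, which lines up with the slowdowns people reported.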

Often a company will have a "sophisticated best-ever algorithm" and then put in a hacky lazy work-around for some problem, and obviously doesn't tell anyone about it. Sometimes the simpler, less-sophisticated solution just works better in practice.

2
cdevs 20 minutes ago 0 replies      
As a developer, people seem surprised I don't have some massive gaming rig at home, but there's something about it that feels like work. I don't want to sit up and be fully alert - I did that all day at work. I want 30 mins to veg out on a console, jumping between Netflix and some quick multiplayer game with fewer hackers glitching out on the game. It's impressive what the PS4 attempts to accomplish: while you're playing a game it tries to download a 40-gig game and somehow tiptoe in the background without screwing up the gaming experience. I couldn't imagine trying to deal with cranking the speed up and down while keeping the game experience playable in an online game. Chrome is slow? Close your 50 tabs. Want faster PS4 downloads? Close your games/apps. Got it.
3
erikrothoff 10 hours ago 1 reply      
Totally unrelated but: Dang, it must be awesome to have a service that people dissect at this level. This analysis is more in-depth and knowledgeable than anything I've ever seen while employed at large companies, where people are literally paid to spend time on the product.
4
g09980 8 hours ago 3 replies      
Want to see something like this for (Apple's) App Store. Downloads are fast, but the App Store experience itself is so, so slow. Takes maybe five seconds to load search results or reviews even on a wi-fi connection.
5
andrewstuart 4 hours ago 2 replies      
It's bizarre, because I bought something from the PlayStation store on my PS4 and it took DAYS to download.

The strange part of the story is that it took so long to download that the next day I went and bought the game (Battlefield 4) from the shop and brought it back home and installed it and started playing it, all whilst the original purchase from the PlayStation store was still downloading.

I asked Sony if they would refund the game that I bought from the PlayStation store, given that I had gone and bought it elsewhere from a physical store during the download, and they said "no".

So I never want to buy from the PlayStation store again.

Why would Sony not care about this above just about everything else?

6
ckorhonen 10 hours ago 3 replies      
Interesting - definitely a problem I've encountered, though I had assumed the issues fell more on the CDN side of things.

Anecdotally, when I switched DNS servers to Google vs. my ISP, PS4 download speeds improved significantly (20 minutes vs. 20 hours to download a typical game).

7
Reedx 6 hours ago 1 reply      
PS3 was even worse in my experience - PS4 was a big improvement, although still a lot slower than Xbox.

However, with both the PS4 and Xbox One it's amazingly slow to browse the stores and much of the dashboard. Anyone else experience that? It's so bad I feel like it must just be me... I avoid it as much as possible, and it definitely decreases the number of games I buy.

8
jcastro 10 hours ago 0 replies      
Lancache says it caches PS4 and XBox, anyone using this? https://github.com/multiplay/lancache

(I use steamcache/generic myself, but should probably move to caching my 2 consoles as well).

9
mbrd 9 hours ago 0 replies      
This Reddit thread also has an interesting analysis of slow PS4 downloads: https://www.reddit.com/r/PS4/comments/522ttn/ps4_downloads_a...
10
tgb 7 hours ago 6 replies      
Sorry for the newbie question, but can someone explain why the round trip time is so important for transfer speeds? From the formula I'm guessing something like this happens: server sends DATA to client, client receives DATA then sends ACK to server, server receives ACK and then finally goes ahead and sends DATA2 to the client. But TCP numbers its packets, and so I would expect the server to continue sending new packets while waiting for ACKs of old packets, and my reading of Wikipedia agrees. So what causes the RTT dependence in the transfer rate?
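(A rough back-of-the-envelope illustration of the usual answer, with made-up numbers: the sender may only keep one receive window's worth of unacknowledged data in flight, so once it has sent a full window it has to wait roughly one RTT for the ACKs before it can send more. Throughput is therefore capped at about window size / RTT. With an assumed 128 KB window and a 200 ms round trip, that is 128 KB / 0.2 s = 640 KB/s, no matter how fast the link itself is; shrink the window or grow the RTT and the cap drops proportionally.)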
11
foobarbazetc 8 hours ago 2 replies      
The CDN thing is an issue too.

Using a local DNS resolver instead of Google DNS helped my PS4 speeds.

The other "trick" if a download is getting slow is to run the in built "network test". This seems to reset all the windows back even if other things are running.

12
tenryuu 2 hours ago 0 replies      
I remember someone hacking at this issue a while ago. They blocked Sony's Japanese server, which the download was coming from. The PlayStation then fetched the file from a more local server, and the speed was considerably faster.

Really strange

13
sydney6 5 hours ago 0 replies      
Is it possible that missing TCP timestamps in the traffic from the CDN are causing the TCP window size auto-scaling mechanism to fail?

See this commit:

https://svnweb.freebsd.org/base?view=revision&revision=31667...

14
Tloewald 9 hours ago 0 replies      
It's not just four years into the PS4's life, either: the PS3 was at least as bad.
15
galonk 3 hours ago 0 replies      
I always assumed the answer was "because Sony is a hardware company that has never understood the first thing about software."

Turns out I was right.

16
jumpkickhit 10 hours ago 0 replies      
I normally warm-boot mine; I saw the speed increase with nothing running before, so I guess I was on the right track.

I hope this is addressed by Sony in the future, or that they at least let us select whether a download is high priority or not.

17
lossolo 9 hours ago 1 reply      
DNS-based geo load balancing/CDNs are the wrong idea today. For example, if you use a DNS resolver with a bad configuration, or one that is not supplied by your ISP, you can be routed to servers thousands of km/miles from your location. Last time I checked, Akamai used that flawed DNS-based system. What you want to use now is what Cloudflare, for example, uses: anycast IP. You just announce the same IP block from multiple routers/locations, and all traffic is routed to the nearest location thanks to how BGP routing works.
18
hgdsraj 7 hours ago 1 reply      
What download speeds do you get? I usually average 8-10 MB/s
19
bitwize 3 hours ago 1 reply      
This is so that there's plenty of bandwidth available for networked play.

The Switch firmware even states that it will halt downloads if a game attempts to connect to the network.

20
frik 10 hours ago 3 replies      
At least the PS4 and Switch have no peer-to-peer downloads.

Win10 and Xbox One have peer-to-peer downloads - who would want that? It's bad for users, wastes upload bandwidth, and counts against your monthly internet consumption. https://www.reddit.com/r/xboxone/comments/3rhs4s/xbox_update...

9
Afraid of Makefiles? Don't be matthias-endler.de
482 points by tdurden  2 days ago   255 comments top 48
1
ejholmes 1 day ago 15 replies      
Make's underlying design is great (it builds a DAG of dependencies, which allows for parallel walking of the graph), but there are a number of practical problems that make it a royal pain to use as a generic build system:

1. Using make in a CI system doesn't really work, because of the way it handles conditional building based on mtime. Sometimes you just don't want the condition to be based on mtime, but rather a deterministic hash, or something else entirely.

2. Make is _really_ hard to use to try to compose a large build system from small re-usable steps. If you try to break it up into multiple Makefiles, you lose all of the benefits of a single connected graph. Read the article about why recursive make is harmful: http://aegis.sourceforge.net/auug97.pdf

3. Let's be honest, nobody really wants to learn Makefile syntax.

As a shameless plug, I built a tool similar to Make and redo, but one that just allows you to describe everything as a set of executables. It still builds a DAG of the dependencies, and allows you to compose massive build systems from smaller components: https://github.com/ejholmes/walk. You can use this to build anything your heart desires, as long as you can describe it as a graph of dependencies.
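To make the DAG point concrete for readers who haven't seen it, here is a minimal, hypothetical sketch (file names are invented, and recipe lines must be indented with a tab): a.out and b.out depend only on input.txt and not on each other, so `make -j2 final` can build them in parallel before the final step runs.

  # a.out and b.out are independent, so -j2 builds them concurrently
  final: a.out b.out
  	cat a.out b.out > final

  a.out: input.txt
  	tr 'a-z' 'A-Z' < input.txt > a.out

  b.out: input.txt
  	wc -l < input.txt > b.out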

2
chungy 2 days ago 7 replies      
I think the primary thing that makes people fear Makefiles is that they try learning them by inspecting the output of automake/autoconf, cmake, or other such systems. These machine-generated Makefiles are almost always awful to look at, primarily because they have several dozen workarounds and least-common-denominators for make implementations dating back to the 1980s.

A properly hand-tailored Makefile is a thing of beauty, and it is not difficult.

3
bluejekyll 2 days ago 7 replies      
Make is awesome. I have always loved make, and got really good with some of its magic. After switching to Java years ago, we collectively decided, "platform independent tools are better", and then we used ant. Man was ant bad, but hey! It was platform independent.

Then we started using maven, and man, maven is ridiculously complex, especially adding custom tasks, but at least it was declarative. After getting into Rust, I have to say, Cargo got the declarative build just right.

But then, for some basic scripts I decided to pick Make back up. And I wondered, why did we move away from this? It's so simple and straightforward. My suggestion, like others are saying, is keep it simple. Try and make declarative files, without needing to customize to projects.

I do wish Make had a platform independent strict mode, because this is still an issue if you want to support different Unixes and Windows.

p.s. I just thought of an interesting project. Something like oh-my-zsh for common configs.

4
raimue 2 days ago 1 reply      
By using pseudo targets only in the example and not real files, the article misses the main point of targets and dependencies: target rules will only be executed if the dependencies changed. make will compare the time of last modification (mtime) on the filesystem to avoid unnecessary compilation. To me, this is the most important advantage of a proper Makefile over a simple shell script always executing lots of commands.
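As a small, hypothetical illustration of that point (one real file target, recipe indented with a tab):

  report.txt: data.txt
  	sort data.txt > report.txt

The first `make` run builds report.txt; a second run just reports that the target is up to date, and only after `touch data.txt` (or a real edit) does the rule fire again. A plain shell script would re-run the sort every time.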
5
rdtsc 2 days ago 4 replies      
Sneaky pro-tip - use Makefiles to parallelize jobs that have nothing to do with building software. Then throw a -j16 or something at it and watch the magic happen.

I was stuck on an old DoD Red Hat box that didn't have GNU parallel or other such things, and a co-worker suggested make. It was available and it did the job nicely.
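A hedged sketch of that trick, with placeholder file names and a placeholder script: give every input file its own output target and let `make -j16 all` do the scheduling.

  INPUTS  := $(wildcard *.csv)
  OUTPUTS := $(INPUTS:.csv=.json)

  all: $(OUTPUTS)

  # ./convert.py is a stand-in for whatever per-file job needs to run
  %.json: %.csv
  	./convert.py $< > $@

  .PHONY: all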

6
martin_ky 1 day ago 0 replies      
Due to their versatility, Makefiles can be creatively used beyond building software projects. Case in point: I used a very simple hand-crafted Makefile [1] to drive massive Ansible deployment jobs (thousands of independently deployed hosts) and work around several Ansible design deficiencies (inability to run whole playbooks in parallel - not just individual tasks, hangs when deploying to hosts over unstable connections, etc.)

The principle was to create a make target and rule for every host. The rule runs ansible-playbook for this single host only. Running the playbook for e.g. 4 hosts in parallel was as simple as running 'make -j4'. At the end of the make rule, an empty file with the name of the host was created in the current directory - this file was the target of the rule - and it prevented running Ansible for the same host again, kind of like an Ansible retry file, only better.

I realize that Ansible probably is not the best tool for this kind of job, but this Makefile approach worked very well and was hacked together very quickly.

[1] https://gist.github.com/martinky/819ca4a9678dad554807b68705b...
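The linked gist isn't reproduced here, but a minimal sketch of the pattern described might look like this (host names, inventory and playbook are placeholders): each host is a real file target, the recipe deploys just that host, and the trailing touch creates the stamp file that keeps it from being re-deployed on the next run.

  HOSTS := host1 host2 host3 host4

  all: $(HOSTS)

  $(HOSTS):
  	ansible-playbook -i inventory -l $@ site.yml
  	touch $@

  .PHONY: all

`make -j4` then deploys four hosts at a time; deleting a stamp file re-queues that host.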

7
syncsynchalt 2 days ago 3 replies      
Today's simple makefiles are the end result of lessons hard learned. You'd be horrified to see what the output of imake looked like.

From memory here's a Makefile that serves most of my needs (use tabs):

  SOURCE=$(wildcard *.c)
  OBJS=$(patsubst %.c,%.o, $(SOURCE))
  CFLAGS=-Wall
  # define CFLAGS and LDFLAGS as necessary

  all: name_of_bin

  name_of_bin: $(OBJS)
  	$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)

  %.o: %.c
  	$(CC) $(CFLAGS) -c -o $@ $<

  clean:
  	rm -f *.o name_of_bin

  .PHONY: clean all

8
AceJohnny2 2 days ago 3 replies      
"Build systems are the bastard stepchild of every software project" -- me a years ago

I've worked in embedded software for over a decade, and all my projects have used Make.

I have a love-hate relationship with Make. It's powerful and effective at what it does, but its syntax is bad and it lacks good datastructures and some basic functions that are useful when your project reaches several hundred files and multiple outputs. In other words, it does not scale well.

Worth noting that JGC's Gnu Make Standard Library (GMSL) [1] appears to be a solution for some of that, though I haven't applied it to our current project yet.

Everyone ends up adding their own half-broken hacks to work around some of Make's limitations. Most commonly, extracting header file dependency from C files and integrating that into Make's dependency tree.

I've looked at alternative build systems. For blank-slate candidates, tup [2] seemed like the most interesting for doing native dependency extraction and leveraging Lua for its data structures and functions (though I initially rejected it due to the silliness of its front page.) djb's redo [3] (implemented by apenwarr [4]) looked like another interesting concept, until you realize that punting on Make's macro syntax to the shell means the tool is only doing half the job: having a good language to specify your targets and dependencies is actually most of the problem.

Oh, and while I'm around I'll reiterate my biggest gripe with Make: it has two mechanisms to keep "intermediate" files, .INTERMEDIATE and .PRECIOUS. The first does not take wildcard arguments, the second does but it also keeps any half-generated broken artifact if the build is interrupted, which is a great way to break your build. Please can someone better than me add wildcard support to .INTERMEDIATE.

[1] http://gmsl.sourceforge.net

[2] http://gittup.org/tup/ (Also, its creator, Mike Shal, now works at Mozilla on their build system.)

[3] http://cr.yp.to/redo.html

[4] https://github.com/apenwarr/redo

9
rrmm 2 days ago 3 replies      
Makefiles are easy for small to medium sized projects with few configurations. After that it seems like people throw up their hands and use autotools to deal with all the recursive make file business.

Most attempts to improve build tools completely replace make rather than adding features. I like the basic simplicity and the syntax (the tab thing is a bit annoying but easy enough to adapt to).

It'd be interesting to hear everyone's go-to build tools.

10
qznc 1 day ago 1 reply      
I love Make for my small projects. It still could be better. Here is my list:

* Colorize errors

* Hide output unless the command fails

* Automatic help command which shows (non-file) targets

* Automatic clean command which deletes all intermediate files

* Hash-based update detection instead of mtime

* Changes in "Makefile" trigger rebuilds

* Parallel builds by default

* Handling multi-file outputs

* Continuous mode which watches the file system for changes and rebuilds automatically

I know of no build system which provides these features and is still simple and generic. Tup is close, but it fails with LaTeX, because of the circular dependencies (generates and reads aux file).

11
wyldfire 2 days ago 1 reply      
> You've learned 90% of what you need to know about make.

That's probably in the ballpark, anyways.

The good (and horrible) stuff:

- implicit rules

- target specific variables

- functions

- includes

I find that with implicit rules and includes I can make really sane, 20-25 line makefiles that are not a nightmare to comprehend.
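For instance, a minimal sketch of such a makefile (file names invented, recipes tab-indented): there is no compile rule at all, because the built-in implicit rule already knows how to produce each .o from the matching .c using $(CC) and $(CFLAGS).

  CC      = cc
  CFLAGS  = -Wall -O2
  objects = main.o parser.o util.o

  app: $(objects)
  	$(CC) $(LDFLAGS) -o $@ $(objects)

  clean:
  	rm -f app $(objects)

  .PHONY: clean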

For a serious project of any scope, it's rare to use bare makefiles, though. recursive make, autotools/m4, cmake, etc all rear their beautiful/ugly heads soon enough.

But make is my go-to for a simple example/reproducible/portable test case.

12
mauvehaus 2 days ago 1 reply      
I feel like any discussion of make is incomplete without a link to Recursive Make Considered Harmful[0]. Whether you agree with the premise or not, it does a nice job of introducing some advanced constructs that make supports and provides a non-contrived context in which you might use them.

[0] http://aegis.sourceforge.net/auug97.pdf

13
nstart 1 day ago 4 replies      
So I saw this and thought, why not give it a try? How hard could it be, right? My goal? Take my bash file that does just this (I started Go just yesterday, so I might be doing cross-compiling wrong :D):

  export GOPATH=$(pwd)
  export PATH=$PATH:$GOPATH/bin
  go install target/to/build
  export GOOS=darwin
  export GOARCH=amd64
  go install target/to/build

which should be simple. Right? Set environment variables, run a command. Set another environment variable, run a command.

45 minutes in and I haven't quite been able to figure it out yet. I definitely figured out how to write my build.sh in less than 15 minutes when I started out.
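Not the parent's eventual solution, but one hedged way to express that build.sh as a Makefile (the install target path is the same placeholder as above, recipes tab-indented): export the variables every recipe needs at the top level, and set GOOS/GOARCH inline in the one recipe that needs them so they don't leak into the native build.

  export GOPATH := $(CURDIR)
  export PATH   := $(PATH):$(GOPATH)/bin

  .PHONY: install install-darwin

  install:
  	go install target/to/build

  install-darwin:
  	GOOS=darwin GOARCH=amd64 go install target/to/build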

14
Animats 2 days ago 2 replies      
The trouble with "make" is that it's supposed to be driven by dependencies, but in practice it's used as a scripting language.If the dependency stuff worked, you would never need

 make clean; make
or

 touch
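(A common reason the dependency tracking "doesn't work" in C projects is that header dependencies were never declared, so edits to a .h rebuild nothing. A minimal sketch of the usual fix, assuming GCC or Clang: have the compiler emit .d dependency fragments and include them.)

  CFLAGS += -MMD -MP
  SRCS   := $(wildcard *.c)
  OBJS   := $(SRCS:.c=.o)

  app: $(OBJS)
  	$(CC) -o $@ $(OBJS)

  # the .d files generated by -MMD record which headers each .o depends on
  -include $(OBJS:.o=.d)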

15
epx 2 days ago 1 reply      
Those who don't understand Make are condemned to reimplement it, poorly.
16
pkkim 2 days ago 2 replies      
One important tip is that the commands under a target each run sequentially, but in separate shells. So if you want to set env vars, cd, activate a Python virtualenv, etc. to affect the next command, you need to make them a single command, like:

  target:
  	cd ./dir; ./script.sh
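If the steps really do need to share state, two common options are chaining them with && (so a failure also stops the recipe), or, in GNU make 3.82+, declaring .ONESHELL so every recipe runs in a single shell. A small, hypothetical sketch of the latter (directory, virtualenv and script names are placeholders); note that .ONESHELL applies to every recipe in the makefile.

  .ONESHELL:
  target:
  	cd ./dir
  	. ./venv/bin/activate
  	./script.sh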

17
misnome 1 day ago 0 replies      
Almost every build system (where I think it isn't controversial to say make is most often used) looks nice and simple with short, single-output examples to demonstrate the basis of a system.

It's when you start having hundreds of sources, targets, external dependencies, flags and special cases that it becomes hard to write sane, understandable Makefiles, which is presumably why people tend to use other systems to generate makefiles.

So sure, understanding what make is, and how it works is probably important, since it'll be around forever. But there are usually N better ways of expressing a build system, nowadays.

18
bauerd 2 days ago 1 reply      
I remember trying to wrap my head around the monstrosity that is Webpack. Gave up and used make, never looked back since
19
DangerousPie 1 day ago 2 replies      
If you want all the greatness of Makefiles without the painful syntax I can highly recommend Snakemake: https://snakemake.readthedocs.io/en/stable/

It has completely replaced Makefiles for me. It can be used to run shell commands just like make, but the fact that it is written in Python allows you to also run arbitrary Python code straight from the Makefile (Snakefile). So now instead of writing a command-line interface for each of my Python scripts, I can simply import the script in the Snakefile and call a function directly.

Eg.

  rule make_plot:
      input:
          data = "{name}.txt"
      output:
          plot = "{name}.png"
      run:
          import my_package
          my_package.plot(input['data'], output['plot'], name = wildcards['name'])
Another great feature is its integration with cluster engines like SGE/LSF, which means it can automatically submit jobs to the cluster instead of running them locally.

20
flukus 2 days ago 2 replies      
Personal blog spam: I learned make recently too and discovered it's good for high-level languages as well; here is an example of building a C# project: http://flukus.github.io/rediscovering-make.html .

Now the blog itself is built with make: http://flukus.github.io/building-a-blog-engine.html

21
rcarmo 1 day ago 1 reply      
These days, most of my projects have a Makefile with four or five simple commands that _just work_ regardless of the language, runtime or operating system in use:

- make deps to setup/update dependencies

- make serve to start a local server

- make test to run automated tests

- make deploy to package/push to production

- make clean to remove previously built containers/binaries/whatever

There are usually a bunch of other more specific commands or targets (like dynamically defined targets to, say, scale-frontends-5 and other trickery), but this way I can switch to any project and get it running without bothering to look up the npm/lein/Python incantation du jour.

Having sane, overridable (?=) defaults for environment variables is also great, and makes it very easy to do stuff like FOOBAR=/opt/scratch make serve for one-offs.

Dependency management is a much deeper and broader topic, but the usefulness of Makefiles to act as a living document of how to actually run your stuff (including documenting environment settings and build steps) shouldn't be ignored.

(Edit: mention defaults)
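A hedged skeleton of that layout (every command below is a placeholder for whatever the project actually uses), including one overridable ?= default of the kind mentioned above:

  PORT ?= 8000

  .PHONY: deps serve test deploy clean

  deps:
  	pip install -r requirements.txt

  serve:
  	python -m http.server $(PORT)

  test:
  	pytest

  deploy:
  	./scripts/deploy.sh

  clean:
  	rm -rf build dist

Either `PORT=9000 make serve` or `make serve PORT=9000` overrides the default, matching the FOOBAR=/opt/scratch example.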

22
rcthompson 2 days ago 0 replies      
For people who are more comfortable in Python, I highly recommend Snakemake[1]. I use it for both big stuff like automating data analysis workflows and small stuff like building my Resume PDF from LyX source.

[1]: https://snakemake.readthedocs.io/en/stable/

23
Joky 2 days ago 0 replies      
Make is fine for simple cases, but I'm working on a project that is based on buildroot right now, and it is kind of a nightmare: make just does not provide any good way at this scale to keep track of what's going on and inspect / understand what goes wrong, especially in the context of a highly parallel build where some dependencies turn out to be missing.

In general, all the implicit behavior it has makes it hard to predict what can happen, again when you scale to support a project that is 1) large and 2) doesn't have a regular structure.

On another, smaller scale: doing an incremental build of LLVM is a lot faster with Ninja compared to Make (cmake-generated).

Make is great: just don't use it where it is not the best fit.

24
gtramont 1 day ago 0 replies      
Here are some tips I like to follow whenever writing Makefiles (I find them joyful to write): http://clarkgrubb.com/makefile-style-guide
25
vacri 2 days ago 1 reply      
One very important thing missing from this primer is that Make targets are not 'mini-scripts', even though they look like it. Every line is 'its own script' in its own subshell - state is not passed between lines.

Make is scary because it's arcane and contains a lot of gotcha rules. I avoided learning Make for a long time. I'm glad I did learn it in the end, though I wouldn't call myself properly fluent in it yet. But there are a ton of gotchas and historical artifacts in Make.

26
rileytg 2 days ago 1 reply      
Wow, I've been feeling like not knowing make has been a major weakness of mine; this article has finally tied all my learning together. I feel totally capable of using make now. Thank you.
27
mauvehaus 2 days ago 3 replies      
Has anybody successfully used make to build java code? I realize there are any number of other options (ant, maven, and gradle arguably being the most popular).

In fact, I realize that the whole idea of using make is probably outright foolish owing to the intertwined nature of the classpath (which expresses runtime dependencies) and compile-time dependencies (which may not be available in compiled form on the classpath) in Java. I'm merely curious if it can be done.

28
zwischenzug 1 day ago 1 reply      
This is great, and needs saying.

Recently I wrote a similar blog about an alternative app pattern that uses makefiles:

https://zwischenzugs.wordpress.com/2017/08/07/a-non-cloud-se...

29
fiatjaf 2 days ago 0 replies      
Makefiles are simple, but 99% of the existing Makefiles are computer-generated incomprehensible blobs. I don't want that.
30
user5994461 1 day ago 1 reply      
>>> Congratulations! You've learned 90% of what you need to know about

The next 90% will be to learn that Make breaks when having tabs and spaces in the same file, and your developers all use slightly different editors that will mix them up all the time.

31
leastangle 1 day ago 3 replies      
I did not know people are afraid of Makefiles. Maybe a naïve question, but what is so scary about make?
32
systemz 1 day ago 0 replies      
Instead of makefile I can recommend Taskfile https://hackernoon.com/introducing-the-taskfile-5ddfe7ed83bd

Simple to use without any magic.

33
quantos 1 day ago 0 replies      
I had written a Non-Recursive Makefile Boilerplate (nrmb) for C, which should work in large projects with a recursive directory structure. There is no need to manually add source file names to the makefile; it automatically does this. One makefile compiles it all. Of course, it isn't perfect, but it does the job and you can modify it for your project. Here is the link:

https://github.com/quantos-unios/nrmb

Have a look :)

34
knowsuchagency 1 day ago 0 replies      
Make is fine, but I think we have better tools nowadays to do the same things.

Even though it may not have been originally intended as such, I've found Fabric http://docs.fabfile.org/en/1.13/tutorial.html to be far, far more powerful and intuitive as a means of creating CLIs (that you can easily parameterize and test) around common tasks such as building software.

35
athenot 1 day ago 0 replies      
After using the various javascript build processes, I went back to good old makefiles and the result is way simpler. I have a target to build the final project with optimizations and a target to build a live-reload version of the project, that watches for changes on disk and rebuilds the parts as needed (thanks to watchify).

This works in my cases because I have browserify doing all the heavy lifting with respect to dependency management.

36
erAck 1 day ago 0 replies      
Take a look at the LibreOffice gbuild system, completely written in GNU make "language". And then come back saying you're not afraid of make ;-)

Still, it would probably be much harder, if possible at all (doubtful for most), to achieve the same with any other tool mentioned here.

37
elnygren 1 day ago 0 replies      
I almost always roll a basic Makefile for even simple web projects. PHONY commands like "make run" and "make test" in every project make context switching a bit easier.

While things like "npm start" are nice, not all projects are Node.js. In my current startup we're going to have standardised Makefiles in each project so it's easy to build, test, run, and install any microservice locally :)

38
ojosilva 1 day ago 0 replies      
Opinion poll: I'm writing a little automation language in YAML and I was wondering whether people prefer a dependency-graph concept where tasks run in parallel by default unless stated as a dependency, or a sequential set of instructions where tasks only run in parallel if explicitly "forked".

I'd say people would lean towards the former, but time and real world experience has shown that sequential dominates everything else.

39
bitwize 2 days ago 1 reply      
Or just use cmake and save yourself time, effort, and pain.
41
brian-armstrong 1 day ago 3 replies      
Using CMake is so much nicer than make, and it's deeply cross-platform. CMake makes cross-compiling really easy, while with make you have to be careful and preserve flags correctly. Much nicer to just include a CMake module that sets up everything for you. Plus it can generate Xcode and Visual Studio configs for you. Doing make by hand just seems unnecessary.
42
Crontab 1 day ago 0 replies      
I haven't ever had to mess with Makefiles, as I don't program anything other than basic shell scripts, but I do remember reading articles in the past that indicated Makefiles can be used for more than just programming.

For example: using Makefiles to automate static webpage creation and image file conversion.

43
JelteF 1 day ago 0 replies      
> Add an @ sign to suppress output of the command that is executed.

This is the exact opposite: it suppresses echoing of the command that is being executed. Its output is still shown as normal.

44
anilakar 1 day ago 0 replies      
Who needs makefiles when you have a build system, you might ask.

The truth: go see the configuration of a Jenkins project, and there's a high chance that one of the lines there still says "make".

45
danso 2 days ago 0 replies      
Worth pointing to Mike Bostock's essay, Why Use Make: https://bost.ocks.org/mike/make/
46
analognoise 1 day ago 0 replies      
Delphi/Pascal haven't used makefiles for... I don't know how long.

Shout out to FreePascal/Lazarus yet again!

47
mycat 1 day ago 0 replies      
So what is the npm install of C/C++?
48
devdoomari 2 days ago 1 reply      
So I guess make is somewhat like gulp.js... but can I split a makefile into multiple files?
10
Why the Brain Needs More Downtime (2013) scientificamerican.com
409 points by tim_sw  2 days ago   100 comments top 9
1
laydn 1 day ago 12 replies      
I've been noticing that I'm more tired and need more downtime on days when I make (or am forced to make) critical decisions.

If I start the day knowing what to do, then I don't really feel the burnout. For example, if I'm designing either a piece of hardware or firmware, and I know how to tackle the problem and it is just a matter of implementing it, I can code/design for 10 hours straight, and when the workday ends, I still feel full of energy.

However, if the day is full of "decisions" (engineering or managerial), at the end of the day, I feel exhausted (and irritable, according to my family)

2
jmcgough 1 day ago 4 replies      
I find that I struggle with offices... you're stuck there for 8+ hours (even if you don't work that way, you need to create an impression), but after several hours of intense focus and the noise and chaos of an open office, I can feel drained and anxious. Some days I'll walk to a nearby park with wifi after work, meditate for a short bit, and then code from there. My focus and creativity come right back after a bit of downtime in a relaxing space.
3
hasenj 1 day ago 8 replies      
I've always had a hard time sleeping/waking on time. What you might call a "night owl".

I'm starting to notice that on weekdays I actually perform better with 6 hours of sleep rather than 8 or 9. Then on the weekend I would "sleep in" to make up for the lost sleep time.

For some reason, if I sleep for 8 or 9 hours, I wake up feeling like I don't want to do anything. I don't feel sluggish or anything. I just feel "satisfied". Like there's nothing to be done. I can just "be". I can't bring myself to focus on any specific task. Nothing feels urgent.

When I sleep 6 hours, somehow I can focus more.

This is combined with not consuming caffeine. If I drink coffee after I have slept only for 6 hours, it makes me tired and sluggish.

4
ihateneckbeards 1 day ago 1 reply      
I noticed I can be intensely focused for about 4 to 6 hours max; after that I'm "washed out" and I become error-prone on complicated tasks.

Unfortunately the 9-hour in-office format constrains me to stay in my seat, so I'll try to work on easier things at that time while being quite unproductive.

How do we bring this fact to companies? It seems only the most ""progressive"" companies like Facebook or Google have really understood this.

5
dodorex 1 day ago 2 replies      
"Some researchers have proposed that people are also physiologically inclined to snooze during a 2 P.M. to 4 P.M. nap zoneor what some might call the afternoon slumpbecause the brain prefers to toggle between sleep and wake more than once a day."

Anecdotally, Thomas Edison was said to sleep only 3-4 hours a night and take frequent (very frequent) naps throughout the day.

https://www.brainpickings.org/2013/02/11/thomas-edison-on-sl...

6
danreed07 1 day ago 1 reply      
I'm ambivalent about this. I have a friend who's a Harvard math major; I've seen him work. He sleeps late and wakes early; when we work together, he always messes up my schedule by calling me in the middle of the night. I'm all tired and groggy the next day, and he's totally fine.

I think some people just inherently have more energy than others.

7
uptownfunk 1 day ago 3 replies      
I think I get a good six hours of actual work in the office. And then I need to check out and take a shower. Something about that after work shower just brings my focus and clarity right back. But if I have to crank with my team for a 12-15 hour day, after max 8 hours, we're all just physically there, but mentally have checked out long before that.

On sleep, 5-6 hours is optimal for me. Too much can be bad, I feel groggy and have brain-fog the rest of the day. I can get by on fewer for one day, but more than that and it becomes painful. I think a lot of this also has to do with lifestyle. How often and when do you eat, have sex, get sunlight, drink water, go out doors, etc. Many levels can be played with here.

Would be interested in hearing any hacks for getting by on less sleep.

8
qaq 1 day ago 0 replies      
The best option I've experienced was working remotely from PST on an EST schedule. You start at 6am, are done at 3, eat + have a drink, take a 1-hour nap, and you have 8 hours left which, after the nap, feels like a whole new day.
9
nisa 1 day ago 0 replies      
I'm having a hard time organising, and especially switching tasks and getting meaningful work done, when multiple unrelated things fall together. Having a single thing to do and being able to just leave work would be great, but at the moment I'm freelancing with multiple jobs, doing sysadmin-style work, learning theory and programming in a new language, and it really just kills me; I'm not getting much done. Once I get traction in a certain task it's okay, but the constant switching is killing me.
11
Blood Test That Spots Tumor-Derived DNA in Early-Stage Cancers hopkinsmedicine.org
353 points by ncw96  2 days ago   62 comments top 11
1
gourneau 2 days ago 4 replies      
I work for another player Guardant Health. We are the Liquid Biopsy market leaders right now. We just raised $360M Series E from SoftBank.

If you find this type of thing interesting and want to be part of it, we are hiring lots of folks. My team is looking for bioinformaticians, Python hackers, and machine learning people. Please reach out to me if you want to know more jgourneau@guardanthealth.com

2
AlexDilthey 2 days ago 0 replies      
All fair enough. The two big immediate challenges in the field are i) that the tumor-derived fraction of total cfDNA can be as low as 1:10000 (stage I) and ii) that it is difficult to make Illumina sequencing more accurate than 1 error in 1000 sequenced bases (in which case the 1:10000 signal is drowned out). This paper uses some clever statistical tricks to reduce Illumina sequencing error; one of these tricks is to leverage population information, i.e. the more samples you sequence the better your understanding of (non-cancer-associated) systematic errors. This follows a long tradition in statistical genetics of using multi-sample panels to improve analysis of individual samples. There are also biochemical approaches like SafeSeq or Duplex Sequencing to reduce sequencing error.

Not-so-obvious point #1 is that the presence of cancer-associated mutations in blood != cancer. You find cancer-associated mutations in the skin of older probands, and assumedly many of the sampling sites would never turn into melanomas. A more subtle point is that cfDNA is likely generated by dying cells, i.e. a weak cancer signature in blood might also be indicative of the immune system doing its job.

Point #2 is that it's not necessarily about individual mutations, which are, due to the signal-to-noise ratio alluded to above, difficult to pick up. One can also look at the total representation of certain genes in cfDNA (many cancers have gene amplifications or deletions, which are easier to pick up because they affect thousands of bases at the same time), and the positioning of individual sequenced molecules relative to the reference genome. It seems that these positions are correlated with gene activities (transcription) in the cells that the cfDNA comes from, and cancer cells have distinct patterns of gene activity.

3
conradev 2 days ago 1 reply      
There is also Freenome, which raised a $65m Series A to bring something similar to market:

> Last year, we raised $5.5 million to prove out the potential of this technology. Now, its time to make sure that its safe and ready for the broader population.

https://medium.com/freenome-stories/freenome-raises-65m-in-s...

4
McKayDavis 2 days ago 1 reply      
I haven't read the referenced study, but I'm sure this is using the same (or very similar) cell free DNA (cfDNA) sequencing techniques currently used clinically for Non Invasive Prenatal Testing (NIPT) to screen for genetic defects such as trisomy 21 (Down Syndrome).

NIPT is a non-invasive blood screening test that is quickly becoming the clinical standard of care. Many insurance companies now cover the entire cost of NIPT screening for for at-risk pregnancies (e.g. women of "Advanced Maternal Age" (35yo+)). The debate is moving to whether it should be utilized/covered for average-risk pregnancies as well.

[1] http://capsprenatal.com/about-nipt/

[2] https://www.genomeweb.com/molecular-diagnostics/aetna-wont-c...

5
hprotagonist 2 days ago 1 reply      
Slowly but surely. This isn't even close to a real diagnostic, but it's a hopeful proof of concept.

I really do wish detection studies would publish a ROC curve, though, or at least d'.

6
maddyboo 2 days ago 4 replies      
Possibly a silly question, but is it possible for a 'healthy' person who doesn't have any cancer risk factors to get a test like this done?
7
melling 2 days ago 3 replies      
According to Craig Venter, early detection is what we need to eliminate cancer:

https://youtu.be/iUqgTYbkHP8?t=15m37s

I guess most are treatable if caught early?

8
amitutk 2 days ago 3 replies      
Didn't Grail raise a billion dollars to do just this?
9
AlexCoventry 2 days ago 2 replies      
> They found none of the cancer-derived mutations among blood samples of 44 healthy individuals.

Is 98% specificity adequate for a cancer test?

10
ziggzagg 2 days ago 1 reply      
When this test has a near 100% success rate, how does it help the patients? Can it really prevent cancer?
11
jonathanjaeger 2 days ago 0 replies      
Tangent: I'm invested in a small-cap stock, Sophiris Bio, that's in a P2B study for prostate cancer with a drug developed out of Johns Hopkins called PRX302 (Topsalysin).

That and the article about blood tests shows there's a lot they're working on for noninvasive or minimally invasive procedures to help prevent cancer early on.

12
Facebook You are the Product lrb.co.uk
380 points by rditooait  3 days ago   286 comments top 9
1
notadoc 3 days ago 9 replies      
I stopped using Facebook years ago and I could not recommend it more. I found it to be mental pollution at best and and a total waste of time.

If you want to 'keep in touch' with people, call or text them. Make an effort to actually interact with the people who matter to you.

2
olympus 3 days ago 8 replies      
I'm here to fix some ignorance, since the source of the "you are the product" idea is not these books.

Metafilter user blue_beetle first put this idea online when he said "If you are not paying for it, you're not the customer; you're the product being sold" in response to the Digg revolt of 2010. The idea apparently existed for a few decades prior regarding TV advertising. I prefer to think blue_beetle was the one who brought it into the zeitgeist.

http://www.metafilter.com/95152/Userdriven-discontent

http://quoteinvestigator.com/2017/07/16/product/

Edit: Alex3917 posted a similar idea on HN on 6 May 2010, beating blue_beetle by a couple months. Gotta give credit where it's due: https://news.ycombinator.com/item?id=15030959

3
akeck 3 days ago 7 replies      
I wonder if, in the future, being able not to be on any social media will be an higher class privilege.
4
phatbyte 3 days ago 0 replies      
I dropped all my social networks in the beginning of the year. I did for two main reasons.

First, for privacy concerns. FB especially was getting too creepy for me. I felt every action I did was being analyzed and filtered; I felt like I was a lab rat. The fact that these companies know so much about us is pretty scary. I felt like I needed to regain my privacy, fight the system somehow.

The second reason was that I wasn't getting anything substantial that could improve my life overall. All I saw was dumb-ass posts, ignorant comments, the passive aggressiveness, the "look at me doing this really mundane thing, but please like my picture so I can feel validated", etc... It feels like a race to see which of us has a better life or something. I honestly feel bad for how much time I spent there when I could have applied that time to learning new things.

After more than 6 months without FB, here's what I've learned:

- I still keep in touch with my closest friends; we chat on Slack/iMessage every day. It's actually a good way to know who really misses you: during this time, only about 5% of my FB friends reached out to me through messages or phone to ask how things were going in life. The other 95%, I really don't even remember most of their names anymore. Just ask yourselves, why do we have to share so much of our lives with so many "friends"? I know we can filter, and create groups, etc., but damn... do you really want to spend your life "managing" relationships, deciding who sees what? I find that tiresome.

- I don't feel left out of anything, because I keep track of local events using other sources, I read news from trustworthy websites, and if I need to share anything I just use good old email, or show any pictures of my latest vacation face-to-face from my phone, without having to share anything with anyone.

- I gained more time and less stress; I don't feel overwhelmed trying to keep track of every social media update. I just don't care. If something important happens I will know it sooner or later.

- I no longer have this need to constantly keep posting photos of what I'm doing outdoors or whatever. I don't have the need to feel validated by anyone but myself.

- But most importantly, I regained my privacy, or at least my social footprint is next to nothing at this point. I'm using uBlock, Firefox, DuckDuckGo and other tools to keep trackers at bay.

I may never completely win this war, but at least my habits aren't being recorded and fed to any ML algorithm.

5
grwthckrmstr 2 days ago 1 reply      
I'm using Facebook to earn my "fuck you money".

The advertising tools are so powerful it's downright scary; the level of targeting you can do with them is just insane.

That's partly the reason why I stopped posting updates. After seeing the depth of the advertising tools.

I don't use Facebook for posting personal updates anymore but only to fuel my business. I realise that the only way I can "choose" to stay out of all these services that track and sell our identity to advertisers is if I have "fuck you money" (money is the currency you exchange for your limited time in order to survive in this world).

6
adrianlmm 3 days ago 2 replies      
I've been using Facebook for years, awesome tool. I'm in contact with friends, relatives and partners; it is awesome.
7
amrrs 3 days ago 0 replies      
Cal Newport has been saying that things like Facebook and other social media are engineered to be addictive, and we're constantly seeing youths falling for it. Adam Alter made a similar comment: when we have proper regulation for substances, why not for something like social media?

FB is not just making us another node in a vast network graph, but also turning us into the worst kind of boring grown-ups, who can't do anything worthier than write an FB post condemning something and feel great about their social responsibility.

8
occultist_throw 3 days ago 2 replies      
Indeed.

For any online service where you do not explicitly pay money for goods/services rendered, you can rest assured that you are paying with data or influence (advertisement).

HN is no different. They control the news, and how the news is displayed. They run the Y Combinator venture capital fund. You do not pay them, but they control influence (advertising). I would expect different if I paid YC for news access... but I don't.

9
kristianc 3 days ago 0 replies      
> Whatever comes next will take us back to those two pillars of the company, growth and monetisation. Growth can only come from connecting new areas of the planet.

This is a questionable assertion. Giant tech companies like Oracle and IBM don't tend to expand in this way, they make acquisitions of smaller companies, and use them to enhance the platform capabilities of the larger product.

I'm sure Zuck will be delighted if the "bottom billion" do all sign up and use Facebook, but they're never going to be massively profitable accounts.

Imo the acquisitions of Instagram and WhatsApp show the way that Facebook will go - Instagram adds a new and lucrative ad format, a profitable user segment and a base for adding in ideas from other platforms, such as Snapchat. WhatsApp builds out Facebook's graph and can be mined for intel.

13
Opioid makers made payments to one in 12 U.S. doctors brown.edu
259 points by metheus  1 day ago   98 comments top 17
1
lr4444lr 1 day ago 2 replies      
Maybe it's because Americans just have this cognitive dissonance that their trusted doctor could be any less than 100% conscientious about their health, but we need to plainly face the fact that if members of the press were able to write exposés about drug makers fudging the data on the addictiveness and effectiveness of their products, then doctors, with their medical training and responsibility over actual people's lives, should have proceeded with more caution and not written scripts mindlessly to get rid of every tiny pain patients had just because they kept asking for something. It's just unconscionable.

EDIT: this survey was also very damning: http://www.chicagotribune.com/news/local/breaking/ct-prescri...

2
elipsey 1 day ago 3 replies      
Reminds me of what Rostand said about murder: "Kill one man, and you are a murderer. Kill millions of men, and you are a conqueror. Kill them all, and you are a god."

Sell one OxyContin and you're a drug dealer; sell a million and you're a C-level.

3
lootsauce 1 day ago 0 replies      
I have two relatives who died from prescription opioid addiction and abuse, and I don't think a few payments here and there are what motivates doctors to prescribe these drugs at a higher rate. Maybe they do, maybe not. The fact is they are powerful drugs that can stop pain AND they make LOTS of money, so they get pushed as the best option.

The thing that is in question in a doctor's mind is: can I say this is the best option? That's what the face time with reps, meals, conferences etc. are doing: giving the MD the perception that this is best practice. It's the professional cover to prescribe what everyone knows is a highly addictive and dangerous narcotic.

If the same kind of money were spent on informing, reminding and reminding again, face-time with addiction prevention advocates, conferences on the opioid epidemic, payments for speaking on alternatives to opioids for pain treatment, giving doctors the facts about these drugs, the addiction and death rates, the impact on families and communities of the inevitable proportion of people who will become addicted and of those who will die, it will be much much harder to say this is a best practice.

But even then, doctors are pushed hard to deal with as many patients as possible. A quick answer that deals with the immediate problem is what the patient wants, and it's all the doc has time and support from the system to give. This situation lends itself to the potential for those who truly benefit, the makers of these drugs, to take advantage of it and push drugs they know will make people addicted, leading to higher use and profits. Lost lives and destroyed families be damned.

4
ransom1538 1 day ago 2 replies      
Feel free to browse doctors' opioid counts here. I was able to match them to their actual profiles. Take into account their field, but, even with that the numbers are ridiculous. If you are in "Family Practice" and prescribe opioids 9167 times per year you probably have a very sore hand.

https://www.opendoctor.io/opioid/highest/

5
ams6110 1 day ago 3 replies      
"the average payment to physicians was $15, the top 1 percent of physicians reported receiving more than $2,600 annually in payments"

Neither is enough to sway most physicians IMO. This seems to me like trying to stir up a scandal where there really isn't one.

I did hear on the radio today that 90% of prescription opiates are sold in the USA and Canada, with the bulk of that being the USA. Other countries treat pain more holistically.

6
gayprogrammer 1 day ago 0 replies      
>> Q: What connection might there be between drug-maker payments to physicians and the current opioid use epidemic?

The article is pure speculation. They did not correlate the payments made to doctors with the prescriptions those doctors made, nor even more broadly with national prescription rates.

This article just makes the implied assumption that doctors push pills onto patients. I don't discount that at one time doctors may have been incentivized to play it fast and loose with pain pills, but those days are LONG gone now.

I would like to see research on the population in terms of predisposition to addiction and susceptibility to chemical dependence.

7
11thEarlOfMar 1 day ago 1 reply      
I don't like the 'pigs at the trough' image of this type of report. There are almost certainly pigs, but there is much more to resolving it than just revoking some licenses or throwing some people in jail.

Standard practice in business of all types is to take clients out for a meal to talk business. Usually, the meal setting enables a different type of legitimate, sober interaction. Many types of business are conducted this way. Some companies have policies that limit the value of what a salesperson can share with a client, for example, Applied Materials limits the value of any type of entertainment by a vendor to $100. This is good corporate policy to inhibit undue influence by vendors.

But it is not 'a payment'.

Likewise, it is pretty easy to see that pharma would want a Dr. who is prescribing their medication and has a positive story to tell to speak at one of their seminars. The Dr. might say that his time is worth $x, and the Pharma needs to cover his travel expenses, and then he'd consent to presenting. In this case, any fees paid would be considered payment. The question is, how much is being paid and does that payment present undue influence. Many doctors are independent contractors and can choose to do this type of activity without a policy to override or limit the value of it. On the other hand, state medical boards which license physicians should have policies that limit all medical and pharmaceutical companies in how they can influence physicians.

8
liveoneggs 1 day ago 3 replies      
9
jasonkostempski 1 day ago 0 replies      
Are there any rules that if a doctor has such a deal, it must be clearly disclosed to the patient verbally and in writing? I think that would not only help deter doctors from making the deal, at the risk of being viewed as untrustworthy, but also help people who blindly trust their doctor to maybe think twice before accepting their solution. I don't think there's a fix for the patients that just want the drug, and as long as they're informed, consenting adults, it should be their prerogative.
10
esm 1 day ago 1 reply      
Payments may affect prescribing, but I think that system factors count for more than many people realize. By way of an example, imagine the following case, which is reasonably common at the outpatient medicine office I am rotating through:

A 46 yo M with diabetes, hypertension, a 30 pack year smoking history, and low back pain that has been treated with oxycodone ever since a failed back operation 1.5 years ago presents to your office for routine follow-up. It's 10am, the hospital allots 15 minutes for routine appointments, and your next patient is in the waiting room. You are his physician -- what do you prioritize?

Smoking, diabetes, and hypertension are a perfect storm for a heart attack in the next 10 years, so how much time do you want to spend optimizing antihypertensive meds and glucose control? You could talk to him about quitting smoking, which is pretty high-yield since it would lower his cardiovascular and cancer risk. On the other hand, he doesn't seem particularly motivated to quit right now.

You would like to see him exercise more and eat better, since his blood sugars are not too bad yet, and you might be able to spare him daily insulin injections. But, his back pain is so bad that walking is difficult and exercise is out of the question. Tylenol and ibuprofen only "take the edge off". Oxycodone is the one thing that seems to really help. He asks you to refill his prescription, especially because "the pain is so bad at night, I can't sleep without it".

His quality-of-life is already poor, and it would become miserable if you took away his opioid script without providing some other form of pain control. You believe that he might benefit from physical therapy and time. He is willing to try PT, but he is adamant that he will not be able to "do all of the stretches and stuff" without taking oxycodone beforehand.

You now have 7 minutes to come up with a plan he agrees on (you're there to help him, after all), put in your orders, and read up on the next patient. How do you want to allocate your time? What if you suggest cutting down on his oxycodone regimen and he pushes back?

I don't know if there is a good answer. But these situations happen all the time, and someone has to make a decision. Most doctors are normal people. The different backgrounds, personalities, willingness to engage in confrontation or teaching, and varying degrees of concern for public health vs. individual patient needs, etc. lead to a variety of approaches. In the end, I think that pharma payments have a marginal effect on most doctors who have families, bosses, insurance constraints, a full waiting room, and are faced with the patient above.

11
robmiller 1 day ago 1 reply      
There is an irony here that the US invaded Afghanistan, the world's largest opium exporter[1].

[1] https://en.wikipedia.org/wiki/Opium_production_in_Afghanista...

12
refurb 1 day ago 8 replies      
This should be kept in context. Let's say the manufacturer presented new data at a conference. During that presentation they provided lunch and refreshments. Every one of those doctors that attended will now show up in the CMS database.

Do we think that a $15 lunch is going to influence a physician to over-prescribe a drug?

13
ddebernardy 1 day ago 1 reply      
Is this really news? John Oliver ran a piece on the topic and the industry's many other dubious practices over 2 years ago, and I'm quite sure he wasn't the first to try to raise awareness.

https://www.youtube.com/watch?v=YQZ2UeOTO3I

14
zeep 1 day ago 0 replies      
And they tell them that their patients suffer from "pseudo-addiction" and should get more of the drugs...
15
CodeWriter23 1 day ago 0 replies      
If it walks like a marketing program and quacks like a marketing program, guess what...
16
oleg123 1 day ago 1 reply      
bribes - or payments?
17
vkou 1 day ago 2 replies      
Not related to payments, but related to opioids:

My father broke his thumb a few weeks ago, while operating a woodchipper. After getting a cast, he went to see a specialist, who recommended that K-wires be surgically installed - small metal rods that go into his thumb, until it heals, at which point they will be pulled out.

He got local anesthetic, got the wires installed, and got sent home. Because he lives in Canada, they gave him nothing for the pain. Two days later, the pain died down, and he's now waiting for the bones to heal.

In America, I can't imagine that doctor would get many positive reviews from his patients, for not prescribing painkillers. Market forces would push him towards over-prescribing... And statistically, some of his patients will become addicted.

14
What next? graydon2.dreamwidth.org
370 points by yomritoyj  23 hours ago   120 comments top 24
1
fulafel 20 hours ago 8 replies      
Again my pet ignored language/compiler technology issue goes unmentioned: data layout optimizations.

Control flow and computation optimizations have enabled use of higher level abstractions with little or no performance penalty, but at the same time it's almost unheard of to automatically perform (or even facilitate) the data structure transformations that are daily bread and butter for programmers doing performance work. Things like AoS->SoA conversion, compressed object references, shrinking fields based on range analysis, flattening/dernormalizing data that is used together, converting cold struct members to indirect lookups, compiling different versions of the code for different call sites based on input data, etc.

It's baffling considering that everyone agrees memory access and cache footprint are the current primary perf bottlenecks, to the point that experts recommend considering on-die computation free and counting only memory accesses in first-order performance approximations.

2
z1mm32m4n 20 hours ago 3 replies      
Graydon's very first answer to "what's next" is "ML modules," a language feature probably few people have experienced first hand. We're talking about ML-style modules here, which are quite precisely defined alongside a language (as opposed to a "module" as more commonly exists in a language, which is just a heap of somewhat related identifiers). ML modules can be found in the mainstream ML family languages (Standard ML, Ocaml) as well as some lesser known languages (1ML, Manticore, RAML, and many more).

It's really hard to do justice explaining how amazing modules are. They capture the essence of abstraction incredibly well, giving you plenty of expressive power (alongside an equally powerful type system). Importantly, they compose; you can write functions from modules to modules!

(This is even more impressive than you think: modules have runtime (dynamic) AND compile time (static) components. You've certainly written functions on runtime values before, and you may have even written functions on static types before. But have you written one function that operates on both a static and a dynamic thing at the same time? And what kind of power does this give you? Basically, creating abstractions is effortless.)

To learn more, I recommend you read Danny Gratzer's "A Crash Course on ML Modules"[1]. It's a good jumping off point. From there, try your hand at learning SML or Ocaml and tinker. ML modules are great!

[1]: https://jozefg.bitbucket.io/posts/2015-01-08-modules.html
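
If you have never touched an ML, a very rough taste of the signature/structure/functor split can be sketched even in Rust, though this is only a loose analogy under the usual "trait ~ signature, impl ~ structure, generic function ~ functor" reading; real ML modules go well beyond it, since they bundle types with values and can be applied to one another directly. The names below are invented:

  // Signature-like: describes what any "monoid module" must provide.
  trait Monoid {
      fn empty() -> Self;
      fn combine(&self, other: &Self) -> Self;
  }

  // Structure-like: one concrete implementation of that signature.
  struct Sum(i64);

  impl Monoid for Sum {
      fn empty() -> Self { Sum(0) }
      fn combine(&self, other: &Self) -> Self { Sum(self.0 + other.0) }
  }

  // Functor-ish: builds new functionality out of whatever Monoid it is given.
  fn concat_all<M: Monoid>(items: &[M]) -> M {
      items.iter().fold(M::empty(), |acc, x| acc.combine(x))
  }

  fn main() {
      let total = concat_all(&[Sum(1), Sum(2), Sum(3)]);
      println!("{}", total.0);
  }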

3
Animats 20 hours ago 2 replies      
One big problem we're now backing into is having incompatible paradigms in the same language. Pure callback, like Javascript, is fine. Pure threading with locks is fine. But having async/await and blocking locks in the same program gets painful fast and leads to deadlocks. Especially if both systems don't understand each other's locking. (Go tries to get this right, with unified locking; Python doesn't.)

The same is true of functional programming. Pure functional is fine. Pure imperative is fine. Both in the same language get complicated. (Rust may have overdone it here.)

More elaborate type systems may not be helpful. We've been there in other contexts, with SOAP-type RPC and XML schemas, superseded by the more casual JSON.

Mechanisms for attaching software unit A to software unit B usually involve one being the master defining the interface and the other being the slave written to the interface. If A calls B and A defines the interface, A is a "framework". If B defines the interface, B is a "library" or "API". We don't know how to do this symmetrically, other than by much manually written glue code.

Doing user-defined work at compile time is still not going well. Generics and templates keep growing in complexity. Making templates Turing-complete didn't help.
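
Going back to the first point about mixing async/await with blocking locks, here is a minimal Rust/Tokio sketch of the "both systems understand each other" case (assuming a tokio = { version = "1", features = ["full"] } dependency; the names are illustrative). The async-aware mutex lets a task yield while it waits, whereas swapping in std::sync::Mutex and holding its guard across an .await is exactly the pattern that can wedge a runtime:

  use std::sync::Arc;
  use std::time::Duration;

  #[tokio::main]
  async fn main() {
      // Async-aware lock: waiting tasks yield to the runtime instead of
      // blocking the thread, so locking composes with .await points.
      let shared = Arc::new(tokio::sync::Mutex::new(0u64));

      let mut handles = Vec::new();
      for _ in 0..4 {
          let shared = Arc::clone(&shared);
          handles.push(tokio::spawn(async move {
              let mut n = shared.lock().await;                    // suspends, never blocks
              tokio::time::sleep(Duration::from_millis(5)).await; // fine to hold across .await
              *n += 1;
          }));
      }
      for h in handles {
          h.await.unwrap();
      }
      println!("{}", *shared.lock().await);

      // The hazardous variant replaces tokio::sync::Mutex with std::sync::Mutex:
      // a guard held across an .await can park the very thread that has to run
      // the task that would release it - the deadlock described above.
  }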

4
borplk 14 hours ago 5 replies      
I'd say the elephant in the room is graduating beyond plaintext (projectional editor, model-based editor).

If you think about it so many of our problems are a direct result of representing software as a bunch of files and folders with plaintext.

Our "fancy" editors and "intellisense" only goes so far.

Language evolution is slowed down because syntax is fragile and parsing is hard.

A "software as data model" approach takes a lot of that away.

You can cut down so much boilerplate and noise because you can have certain behaviours and attributes of the software be hidden from immediate view or condensed down into a colour or an icon.

Plaintext forces you to have a visually distracting element in front of you for every little thing. So as a result you end up with obscure characters and generally noisy code.

If your software is always in a rich data model format your editor can show you different views of it depending on the context.

So how you view your software when you are in "debug mode" could be wildly different from how you view it in "documentation mode" or "development mode".

You can also pull things from arbitrarily places into a single view at will.

Thinking of software as "bunch of files stored in folders" comes with a lot of baggage and a lot of assumptions. It inherently biases how you organise things. And it forces you to do things that are not always in your interest. For example you may be "forced" to break things into smaller pieces more than you would like because things get visually too distracting or the file gets too big.

All of that is just arbitrary side effects of this ancient view of software that will immediately go away as soon as you treat AND ALWAYS KEEP your software as a rich data model.

Hell, all of the problems with parsing text and ambiguity in syntax and so on will also disappear.

5
gavanwoolery 22 hours ago 2 replies      
I like to read about various problems in language design, as someone who is relatively naive to its deeper intricacies it really helps broaden my view. That said I have seen a trend towards adding various bells and whistles to languages without any sort of consideration as to whether it actually, in a measurable way, makes the language better.

The downside to adding an additional feature is that you are much more likely to introduce a leaky abstraction (even with things as minor as syntactic sugar). Your language has more "gotchas", a steeper learning curve, and a higher chance of getting things wrong or not understanding what is going on under the hood.

For this reason, I have always appreciated relatively simple homoiconic languages that are close-to-the-metal. That said, the universe of tools and build systems around these languages has been a growing pile of cruft and garbage for quite some time, for understandable reasons.

I envision the sweet spot lies at a super-simple system language with a tightly-knit and extensible metaprogramming layer on top of it, and a consistent method of accessing common hardware and I/O. Instant recompilation ("scripting") seamlessly tied to highly optimized compilation would be ideal while I am making a wishlist :)

6
mcguire 12 hours ago 2 replies      
[Aside: Why do I have the Whiley (http://whiley.org/about/overview/) link marked seen?]

I was mildly curious why Graydon didn't mention my current, mildly passionate affair, Pony (https://www.ponylang.org/), and its use of capabilities (and actors, and per-actor garbage collection, etc.). Then, I saw,

"I had some extended notes here about "less-mainstream paradigms" and/or "things I wouldn't even recommend pursuing", but on reflection, I think it's kinda a bummer to draw too much attention to them. So I'll just leave it at a short list: actors, software transactional memory, lazy evaluation, backtracking, memoizing, "graphical" and/or two-dimensional languages, and user-extensible syntax."

Which is mildly upsetting, given that Graydon is one of my spirit animals for programming languages.

On the other hand, his bit on ESC/dependent typing/verification tech. covers all my bases: "If you want to play in this space, you ought to study at least Sage, Stardust, Whiley, Frama-C, SPARK-2014, Dafny, F*, ATS, Xanadu, Idris, Zombie-Trellys, Dependent Haskell, and Liquid Haskell."

So I'm mostly as happy as a pig in a blanket. (Specifically, take a look at Dafny (https://github.com/Microsoft/dafny) (probably the poster child for the verification approach) and Idris (https://www.idris-lang.org/) (voted most likely to be generally usable of the dependently typed languages).)

7
carussell 22 hours ago 5 replies      
All this and handling overflow still doesn't make the list. Had it been the case that easy considerations for overflow were baked into C back then, we probably wouldn't be dealing with hardware where handling overflow is even more difficult than it would have been on the PDP-11. (On the PDP-11, overflow would have trapped.) At the very least, it would be the norm for compilers to emulate it whether there was efficient machine-level support or not. However, that didn't happen, and because of that, even Rust finds it acceptable to punt on overflow for performance reasons.
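
For readers who have not seen how this plays out in today's Rust, a small sketch of the status quo the comment refers to: plain + panics on overflow in debug builds and wraps in release builds (where overflow checks are off by default), and anything else must be requested explicitly. The code below sticks to the explicit methods so it behaves the same way in both build modes:

  fn main() {
      let big: i32 = i32::MAX;

      // Explicit, always-defined alternatives to plain `+`:
      assert_eq!(big.checked_add(1), None);          // overflow reported as None
      assert_eq!(big.wrapping_add(1), i32::MIN);     // two's-complement wrap
      assert_eq!(big.saturating_add(1), i32::MAX);   // clamp at the bound
      let (wrapped, overflowed) = big.overflowing_add(1);
      assert!(overflowed && wrapped == i32::MIN);

      // `big + 1` here would panic in a debug build and silently wrap in a
      // default release build - the "punt" the comment above refers to.
      println!("all overflow cases handled explicitly");
  }
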
8
statictype 20 hours ago 1 reply      
So Graydon works at Apple on Swift?

Wasn't he the original designer of Rust and employed at Mozilla?

Surprised that this move completely went under my radar

9
dom96 10 hours ago 0 replies      
Interesting to see the mention of effect systems. However, I am disappointed that the Nim programming language wasn't mentioned. Perhaps Eff and Koka have effect systems that are far more extensive, but as a language that doesn't make effect systems its primary feature I think Nim stands out.

Here is some more info about Nim's effect system: https://nim-lang.org/docs/manual.html#effect-system

10
mcguire 13 hours ago 0 replies      
"Writing this makes me think it deserves a footnote / warning: if while reading these remarks, you feel that modules -- or anything else I'm going to mention here -- are a "simple thing" that's easy to get right, with obvious right answers, I'm going to suggest you're likely suffering some mixture of Stockholm syndrome induced by your current favourite language, Engineer syndrome, and/or DunningKruger effect. Literally thousands of extremely skilled people have spent their lives banging their heads against these problems, and every shipping system has Serious Issues they simply don't deal with right."

Amen!

11
rtpg 18 hours ago 2 replies      
The blurring of types and values as part of the static checking very much speaks to me.

I've been using Typescript a lot recently with union types, guards, and other tools. It's clear to me that the type system is very complex and powerful! But sometimes I would like to make assertions that are hard to express in the limited syntax of types. Haskell has similar issues when trying to do type-level programming.

Having ways to generate types dynamically and hook into typechecking to check properties more deeply would be super useful for a lot of web tools like ORMs.

12
simonebrunozzi 13 hours ago 1 reply      
I would have preferred a more informative HN title, instead of a semi-clickbaity "What next?", e.g.

"The next big step for compiled languages?"

13
bjz_ 18 hours ago 2 replies      
I would love to see some advancements into distributed, statically typed languages that can be run across a cluster, and that would support type-safe, rolling deployments. One would have to ensure that state could be migrated safely, and that messaging can still happen between the nodes of different versions. Similar to thinking about this 'temporal' dimension of code, it would be cool to see us push versioning and library upgrades further, perhaps supporting automatic migrations.
14
lazyant 11 hours ago 2 replies      
What would be a good book / website to learn the concepts & nomenclature in order to understand the advanced language discussions in HN like this one?
15
ehnto 21 hours ago 5 replies      
I know I am basically dangling meat into the lion's den with this question: How has PHP7 done in regards to the Modules section or modularity he speaks of?

I am interested in genuine and objective replies of course.

(Yes your joke is probably very funny and I am sure it's a novel and exciting quip about the state of affairs in 2006 when wordpress was the flagship product)

16
msangi 21 hours ago 1 reply      
It's interesting that he doesn't want to draw too much attention to actors while they are prominent in Chris Lattner's manifesto for Swift [1]

[1] https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...

17
hderms 22 hours ago 0 replies      
Fantastic article. This is the kind of stuff I go to Hacker News to read. Had never even heard of half of these conceptual leaps.
18
leeoniya 22 hours ago 3 replies      
it's interesting that Rust isn't mentioned once in his post. i wonder if he's disheartened with the direction his baby went.
19
ilaksh 22 hours ago 0 replies      
I think at some point we will get to projection editors being mainstream for programming, and eventually things that we normally consider user activities will be recognized as programming when they involve Turing complete configurability. This will be an offshoot of more projection editing.

I also think that eventually we may see a truly common semantic definitional layer that programming languages and operating systems can be built off of. It's just like the types of metastructures used as the basis for many platforms today, but with the idea of creating a truly Uber platform.

Another futuristic idea I had would be a VR projectional programming system where components would be plugged and configured in 3d.

Another idea might be to find a way to take the flexibility of advanced neural networks and make it a core feature of a programming language.

20
jancsika 15 hours ago 1 reply      
I'm surprised build time wasn't on the list.

Curious and can't find anything: what's the most complex golang program out there, and how long does it take to compile?

21
AstralStorm 23 hours ago 3 replies      
Extra credit for whoever implements logic proofs on concurrent applications.
22
platz 20 hours ago 2 replies      
What's wrong with software transactional memory?
23
baby 18 hours ago 0 replies      
Can someone edit the title to something clearer? Thanks!
24
rurban 16 hours ago 1 reply      
No type system improvements to support concurrency safety?
15
Docker Is Raising Funding at $1.3B Valuation bloomberg.com
305 points by moritzplassnig  1 day ago   264 comments top 21
1
bane 1 day ago 5 replies      
I feel like this is one of those valuations which makes sense contextually, but not based on any sort of business reality.

Docker reminds me a lot of the PKZIP utilities. For those who don't remember, back in the late 80s the PKZIP utilities became a kind of de facto standard on non-Unixes for file compression and decompression. The creator of the utilities was a guy named Phil Katz, who meant to make money off of the tools but, as was the fashion at the time, released them as basically feature-complete shareware.

Some people did register, and quite a few companies registered to maintain compliance so PKWare (the company) did make a bit of money, but most people didn't bother. Eventually the core functionality was simply built into modern Operating Systems and various compatible clones were released for everything under the sun.

Amazingly the company is still around (and even selling PKZIP!) https://www.pkware.com/pkzip

Katz turned out to be a tragic figure http://www.bbsdocumentary.com/library/CONTROVERSY/LAWSUITS/S...

But my point is, I know of many many (MANY) people using Docker in development and deployment and I know of nobody at all who's paying them money. I'm sure they exist, and they presumably make revenue from somewhere, but they're basically just critical infrastructure at this point and just becoming an expected part of the OS, not a company.

2
new299 1 day ago 11 replies      
I'm so curious to understand how you pitch Docker at a 1.3BUSD valuation. With I assume a potential valuation of ~10BUSD to give the investors a decent exit?

Does anyone have an insight into this?

Looks like Github's last valuation was at 2BUSD. That also seems high, but I can understand this somewhat better as they have revenue, and seem to be much more widely used/accepted than Docker. In addition to that I can see how Github's social features are valuable, and how they might grow into other markets. I don't see this for Docker...

3
locusofself 1 day ago 2 replies      
I used docker for a while last year and attended Dockercon. I was really excited about it and thought it was going to solve many of my problems.

But with how complicated my stack is, it just didn't make sense to use ultimately. I loved the idea of it, but in the end good old virtual machines and configuration management can basically do most of the same stuff.

I guess if you want to pack your servers to the brim with processes and shave off whatever performance hit you get from KVM or XEN, I get it.

But the idea of the filesystem layers and immutable images just kind of turned into a nightmare for me when I asked myself "how the hell am I going to update/patch this thing"

Maybe I'm crazy, but after a lot of excitement it seemed more like an extra layer of tools to deal with more than anything.

4
foota 1 day ago 3 replies      
My first reaction was that I was surprised it wasn't higher.

My second reaction was incredulity at how ridiculous my first reaction was.

5
raiyu 1 day ago 0 replies      
Monetizing open source directly is a bit challenging because you end up stuck in the same service model as everyone else. Which is basically to sell various support contracts to the fortune 100-500.

Forking a project into an enterprise (paid-for) version and limiting those features in the original open source version creates tension in the community, and usually isn't a model that leads to success.

Converting an open source project directly into a paid for software or SaaS model is definitely the best route as it reduces head count and allows you to be a software company instead of a service company.

Perhaps best captured by Github wrapping git with an interface and community and then directly selling a SaaS subscription and eventually an enterprise hosted version that is still delivered on a subscription basis, just behind the corporate firewall.

Also of note is that Github didn't create git itself; instead it was built from a direct need that developers saw themselves, which means they asked "what is the product I want?" rather than "we built and maintain git, so let's do that and eventually monetize it."

6
ahallock 1 day ago 5 replies      
Docker still has a long way to go in terms of local development ergonomics. Recently, I finally had my chance to onboard a bunch of new devs and have them create their local environment using Docker Compose (we're working on a pretty standard Rails application).

We were able to get the environments set up and the app running, but the networking is so slow as to be pretty much unusable. Something is wrong with syncing the FS between Docker and the host OS. We were using the latest Docker for Mac. If the out of the box experience is this bad, it's unsuitable for local development. I was actually embarrassed.

7
z3t4 1 day ago 9 replies      
I don't understand containers. First you go through great pain sharing and reusing libraries. Then you make a copy of all the libraries and the rest of the system for each program!?
8
Steeeve 1 day ago 1 reply      
The guy who came up with chroot in the first place is kicking himself.
9
kev009 1 day ago 1 reply      
Do they actually have any significant revenue? I love developer tools companies, but there are several tools upstarts that have no proven business model. They look like really bad gambles in terms of VC investment, unless you can get in early enough to unload to other fools.
10
contingencies 1 day ago 1 reply      
I worked with LXC since 2009, then personally built a cloud provider agnostic workflow interface superior in scope to Docker in feature set[1] between about 2013-2014 as a side project to assist with my work (managing multi-DC, multi-jurisdiction, high security and availability infrastructure and CI/CD for a major cryptocurrency exchange). (Unfortunately I was not able to release that code because my employer wanted to keep it closed source, but the documentation[2] and early conception[3] have been online since early days.) I was also an early-stage contributor to Docker, providing security-related issues and resolutions based upon my early LXC experience.

Based upon the above experience, I firmly believe that Docker could be rewritten by a small team of programmers (~1-3) within a few month timeframe.

[1] Docker has grown to add some of this now, but back then had none of it: multiple infrastructure providers (physical bare metal, external cloud providers, own cloud/cluster), normalized CI/CD workflow, pluggable FS layers (eg. use ZFS or LVM2 snapshots instead of AUFS - most development was done on ZFS), inter-service functional dependency, guaranteed-repeatable platform and service package builds (network fetches during package build process are cached)...

[2] http://stani.sh/walter/cims/

[3] http://stani.sh/walter/pfcts/

11
eldavido 1 day ago 0 replies      
I wish people would stop talking about valuation this way, emphasizing the bullshit headline valuation.

The reality is that (speculating), they probably issued a new class of stock, at $x/share, and that class of stock has all kinds of rights, provisions, protections, etc. that the others don't, and may or may not have any bearing whatsoever on what the other classes of shares are worth.

12
jdoliner 1 day ago 2 replies      
There's a couple of things in this article that I don't think are true. I don't think Ben Golub was a co-founder of Docker. Maybe he counts as a co-founder of Docker but not of Dotcloud? That seems a bit weird though. I also am pretty sure Docker's headquarters are in San Francisco, not Palo Alto.
13
StanislavPetrov 1 day ago 0 replies      
As someone who witnessed the 2000 tech bubble pop, I feel like Bill Murray in Groundhog Day, except unfortunately this time it's not just tech. It's going to end very badly.
14
slim 1 day ago 2 replies      
Docker is funded by In-Q-Tel
15
frigen 18 hours ago 1 reply      
Unikernels are a much better solution to the problems that Docker solves.
16
jaequery 1 day ago 2 replies      
why are they called Software Maker?
17
throw2016 1 day ago 4 replies      
Docker generated value from the LXC project, aufs, overlay, btrfs and a ton of other open source projects, yet few people know about these projects, their authors, or, in the case of the extremely poorly marketed LXC project, even what it is, thanks to negative marketing by the Docker ecosystem hellbent on 'owning containers'.

Who is the author of aufs or overlayfs? Should these projects work with no recognition while VC-funded companies with marketing funds swoop down and extract market value without giving anything back? How has Docker contributed back to all the projects it is critically dependent on?

This does not seem like a sustainable open source model. A lot of critical problems around containers exist in places like the layer filesystems and the kernel, and these will not get fixed by Docker but by aufs or overlayfs and kernel subsystems; given that most don't even know the authors of these projects, how will this work?

There has been a lot of misleading marketing on Linux containers right from 2013 here on HN itself and one wishes there was more informed discussion that would correct some of this misinformation, which didn't happen.

18
elsonrodriguez 21 hours ago 0 replies      
That's a lot of money for a static compiler.
19
baq 1 day ago 0 replies      
they should file for an ICO, dockercoin or containotoken would sell like hotcakes. /s
20
alexnewman 1 day ago 0 replies      
It's time for the cash grabs before the economy implodes
21
gue5t 1 day ago 4 replies      
Imagine the "value" investors could make if they cordoned off every useful composition of Linux kernel features into a "product" like this.
16
Try Out Rust IDE Support in Visual Studio Code rust-lang.org
261 points by Rusky  1 day ago   80 comments top 12
1
modeless 1 day ago 5 replies      
I have been using this for a few weeks, as a newcomer to Rust. Although it has some issues, I would not try to develop Rust code without it. It is incredibly useful and works well enough for day-to-day use.

Some of the issues I've found:

* Code completion sometimes simply fails to work. For example, inside future handlers (probably because this involves a lot of type inference).

* When errors are detected only the first line of the compiler error message is accessible in the UI, often omitting critical information and making it impossible to diagnose the problem.

* It is often necessary to manually restart RLS, for example when your cargo.toml changes. It can take a very long time to restart if things need to be recompiled and there isn't much in the way of progress indication.

* This is more of a missing feature, but type inference is a huge part of Rust, and it's often difficult to know what type the type inference engine has chosen for parts of your code. There's no way to find out using RLS in VSCode that I've seen, or go to the definition of inferred types, etc.

Other issues I've seen as a newcomer to Rust:

* It's very easy to get multiple versions of the same dependency in your project by accident (when your dependencies have dependencies), and there is no compiler warning when you mix up traits coming from different versions of a crate. You just get impossible-seeming errors.

* Compiler speed is a big problem.

* The derive syntax is super clunky for such an essential part of the language. I think Rust guides should emphasize derive more, as it's unclear at first how essential it really is. Almost all of my types derive multiple traits (see the small sketch after this list).

* In general, Rust is hard. It requires a lot more thinking than e.g. Python or even C. As a result, my forward progress is much slower. The problems I have while coding in Rust don't even exist in other languages. I'm sure this will improve over time but I'm not sure it will ever get to the point where I feel as productive as I do in other languages.
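
To show what the derive point from the list above looks like in practice, a tiny sketch (the type and field names are made up):

  // Without the derives, printing, cloning and comparing `Config` would each
  // need a hand-written impl block.
  #[derive(Debug, Clone, PartialEq)]
  struct Config {
      retries: u32,
      verbose: bool,
  }

  fn main() {
      let a = Config { retries: 3, verbose: true };
      let b = a.clone();
      assert_eq!(a, b);       // PartialEq comes from the derive
      println!("{:?}", a);    // so does Debug formatting
  }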

2
sushisource 1 day ago 1 reply      
Rust is not only a fantastic language, but the level of community involvement from the devs is just completely unlike any other language I've seen in a very long time. That really makes me excited that it will be adopted in the industry over time and ideally replace some of the nightmare-level C++ code out there.
3
KenoFischer 1 day ago 2 replies      
I haven't had the chance to try the Rust language mode, but I've been using VS Code for all my julia development lately, and I'm pretty impressed. It's quite a nice editor. I avoided using it for a very long time because I thought it'd match Atom's slowness due to their shared Electron heritage. But for some reason VS Code feels a lot snappier. Not quite Sublime levels, but perfectly usable.
4
int_19h 1 day ago 2 replies      
It looks like code completion is extremely basic. I tried this:

  struct Point {
      x: i32,
      y: i32,
  }

  fn main() {
      println!("Hello, world!");
      let pt = Point { x: 1, y: 2 };
      println!("{} {}", pt.x, pt.y);
      let v = vec![pt];
      let vpt = &v[0];
      println!("{} {}", vpt.x, vpt.y);
  }
And I can't get any dot-completions on vpt (but I can on pt). Which is also kinda weird, because if I hover over vpt, it does know that it is a &Point...

Even more weird is that if I add a type declaration to "let vpt" (specifying the same type that would be inferred), then completion works.

That sounds like a really basic scenario... I mean, type inference for locals is pervasive in Rust.

5
6
CSDude 1 day ago 1 reply      
I tried it, and it works really nicely. If you are looking for an alternative, IntelliJ IDEA works very well once you install the Rust plugin.
7
RussianCow 1 day ago 2 replies      
Does this support macro expansion of any kind? I'm currently using the plugin for IntelliJ IDEA, and it works really well aside from completely lacking support for macros, which makes its type annotations and similar features nearly useless for the project I'm working on.
8
rl3 1 day ago 1 reply      
Anyone here writing Rust on Windows and using WSL (Windows Subsystem for Linux) in your workflow?

I've found using WSL's "bash -c" from my Rust project's working directory in Windows to be a rather elegant way to compile and run code for a Linux target.

Theoretically it should be possible to remote debug Linux binaries in WSL from an editor in Windows, but I haven't had time to explore this yet. Both GDB and LLDB have remote debugging functionality.

9
alkonaut 19 hours ago 0 replies      
The vscode rust support is progressing nicely and if vscode is already your go-to editor it's the obvious choice.

However if you just want to dip your toes and get going with rust with minimal fuss, I find IntelliJ community+Rust to be the best combo. Vscode+Rust is not as polished yet.

10
demarq 1 day ago 1 reply      
Autofix doesn't seem to trigger for me, would someone confirm it's not just me?

try this in your editor

  let x = 4;
  if x = 5 {}
it should figure out you wanted x == 5.

11
tomc1985 1 day ago 1 reply      
I would love Rust support in Visual Studio proper...
12
Dowwie 1 day ago 1 reply      
Is anyone working on support in Atom?
17
Towards a JavaScript Binary AST yoric.github.io
301 points by Yoric  1 day ago   200 comments top 32
1
nfriedly 1 day ago 6 replies      
To clarify how this is not related to WebAssembly, this is for code written in JavaScript, while WASM is for code written in other languages.

It's a fairly simple optimization - it's still JavaScript, just compressed and somewhat pre-parsed.

WASM doesn't currently have built-in garbage collection, so to use it to compress/speed up/whatever JavaScript, you would have to compile an entire JavaScript Virtual Machine into WASM, which is almost certainly going to be slower than just running regular JavaScript in the browser's built-in JS engine.

(This is true for the time being, anyway. WASM should eventually support GC at which point it might make sense to compile JS to WASM in some cases.)

2
cabaalis 1 day ago 12 replies      
So, compiled Javascript then? "We meet again, at last. The circle is now complete."

The more I see interpreted languages being compiled for speed purposes, and compiled languages being interpreted for ease-of-use purposes, desktop applications becoming subscription web applications (remember mainframe programs?), and then web applications becoming desktop applications (electron), the more I realize that computing is closer to clothing fads than anything else. Can't wait to pick up some bellbottoms at my local Target.

3
apaprocki 1 day ago 3 replies      
From an alternate "not the web" viewpoint, I am interested in this because we have a desktop application that bootstraps a lot of JS for each view inside the application. There is a non-insignificant chunk of this time spent in parsing and the existing methods that engines expose (V8 in this case) for snapshotting / caching are not ideal. Given the initial reported gains, this could significantly ratchet down the parsing portion of perceived load time and provide a nice boost for such desktop apps. When presented at TC39, many wanted to see a bit more robust / scientific benchmarks to show that the gains were really there.
4
le-mark 1 day ago 3 replies      
Here's some perspective for where this project is coming from:

> So, a joint team from Mozilla and Facebook decided to get started working on a novel mechanism that we believe can dramatically improve the speed at which an application can start executing its JavaScript: the Binary AST.

I really like the organization of the present article; the author answered all the questions I had, in an orderly manner. I'll use this format as a template for my own writing. Thanks!

Personally, I don't see the appeal of such a thing, and it seems unlikely all browsers would implement it. It will be interesting to see how it works out.

5
mannschott 1 day ago 1 reply      
This is reminiscent of the technique used by some versions of ETH Oberon to generate native code on module loading from a compressed encoding of the parse tree. Michael Franz described the technique as "Semantic-Dictionary Encoding":

SDE is a dense representation. It encodes syntactically correct source program by a succession of indices into a semantic dictionary, which in turn contains the information necessary for generating native code. The dictionary itself is not part of the SDE representation, but is constructed dynamically during the translation of a source program to SDE form, and reconstructed before (or during) the decoding process. This method bears some resemblance to commonly used data compression schemes.

See also "Code-Generation On-the-Fly: A Key to Portable Software" https://pdfs.semanticscholar.org/6acf/85e7e8eab7c9089ca1ff24...

This same technique also was used by JUICE, a short-lived browser plugin for running software written in Oberon in a browser. It was presented as an alternative to Java byte code that was both more compact and easier to generate reasonable native code for.

https://github.com/Spirit-of-Oberon/Juice/blob/master/JUICE....

I seem to recall that the particular implementation was quite tied to the intermediate representation of the OP2 family of Oberon compilers making backward compatibility in the face of changes to the compiler challenging and I recall a conversation with someone hacking on Oberon that indicated that he'd chosen to address (trans)portable code by the simple expedient of just compressing the source and shipping that across the wire as the Oberon compiler was very fast even when just compiling from source.

I'm guessing the hard parts are:
(0) Support in enough browsers to make it worth using this format.
(1) Coming up with a binary format that's actually significantly faster to parse than plain text. (SDE managed this.)
(2) Designing the format to not be brittle in the face of change.

6
nine_k 1 day ago 1 reply      
This is what BASIC interpreters on 8-bit systems did from the very beginning. Some BASIC interpreters did not even allow you to type the keywords. Storing a trivially serialized binary form of the source code is a painfully obvious way to reduce RAM usage and improve execution speed. You can also trivially produce the human-readable source back.

It's of course not compilation (though parsing is the first thing a compiler would do, too). It's not generation of machine code, or VM bytecode. It's mere compression.

This is great news because you get to see the source if you want, likely nicely formatted. You can also get rid of the minifiers, and thus likely see reasonable variable names in the debugger.
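
The keyword-tokenizing trick described above is easy to make concrete; here is a toy sketch in Rust (the keywords and token byte values are invented, not taken from any real BASIC):

  use std::collections::HashMap;

  fn main() {
      // Invented one-byte tokens for two keywords.
      let encode: Vec<(&str, u8)> = vec![("PRINT", 0x99), ("GOTO", 0x89)];
      let decode: HashMap<u8, &str> = encode.iter().map(|&(kw, tok)| (tok, kw)).collect();

      let line = "PRINT HELLO : GOTO 10";

      // Store keywords as single bytes, everything else as plain ASCII.
      let mut stored: Vec<u8> = Vec::new();
      for (i, word) in line.split(' ').enumerate() {
          if i > 0 {
              stored.push(b' ');
          }
          match encode.iter().find(|&&(kw, _)| kw == word) {
              Some(&(_, tok)) => stored.push(tok),
              None => stored.extend_from_slice(word.as_bytes()),
          }
      }

      // Listing the program expands the tokens back into readable keywords.
      let listed: String = stored
          .iter()
          .map(|&b| match decode.get(&b) {
              Some(kw) => kw.to_string(),
              None => (b as char).to_string(),
          })
          .collect();

      println!("{} bytes stored for {} bytes of source: {}", stored.len(), line.len(), listed);
  }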

7
onion2k 1 day ago 2 replies      
This is a really interesting project from a browser technology point of view. It makes me wonder how much code you'd need to be deploying for this to be useful in a production environment. Admittedly I don't make particularly big applications but I've yet to see parsing the JS code as a problem, even when there's 20MB of libraries included.
8
iainmerrick 1 day ago 1 reply      
This article says "Wouldnt it be nice if we could just make the parser faster? Unfortunately, while JS parsers have improved considerably, we are long past the point of diminishing returns."

I'm gobsmacked that parsing is such a major part of the JS startup time, compared to compiling and optimizing the code. Parsing isn't slow! Or at least it shouldn't be. How many MBs of Javascript is Facebook shipping?

Does anyone have a link to some measurements? Time spent parsing versus compilation?

9
ryanong 1 day ago 2 replies      
This is some amazing progress, but reading this and hearing how difficult JavaScript is as a language to design around makes me wonder how many hours we have spent optimizing a language designed in 2 weeks and living with those consequences. I wish we could version our JavaScript within a tag somehow so we could slowly deprecate code. I guess that would mean, though, that browsers would have to support two languages, which would suck..... this really is unfortunately the path of least resistance.

(I understand I could use elm, cjs, Emscripten or any other transpiler, but I was thinking of hours spent around improving the js vm.)

10
vvanders 1 day ago 2 replies      
Lua has had something very similar (bytecode vs AST) via luac for a long while now. We've used it to speed up parse times in the past and it helps a ton in that area.
11
nikita2206 1 day ago 0 replies      
In this thread: people not understanding the difference between byte code (representing code in the form of instructions) and AST.
12
s3th 1 day ago 3 replies      
I'm very skeptical about the benefits of a binary JavaScript AST. The claim is that a binary AST would save on JS parsing costs. However, JS parse time is not just tokenization. For many large apps, the bottleneck in parsing is instead in actually validating that the JS code is well-formed and does not contain early errors. The binary AST format proposes to skip this step [0], which is equivalent to wrapping function bodies with eval. This would be a major semantic change to the language that should be decoupled from anything related to a binary format. So IMO the proposal conflates tokenization with changing early error semantics. I'm skeptical the former has any benefits, and the latter should be considered on its own terms.

Also, there's immense value in text formats over binary formats in general, especially for open, extendable web standards. Text formats are more easily extendable as the language evolves because they typically have some amount of redundancy built in. The W3C outlines the value here (https://www.w3.org/People/Bos/DesignGuide/implementability.h...). A JS text format in general also means engines/interpreters/browsers are simpler to implement and therefore that JS code has better longevity.

Finally, although WebAssembly is a different beast and a different language, it provides an escape hatch for large apps (e.g. Facebook) to go to extreme lengths in the name of speed. We don't need to complicate JavaScript with such a powerful mechanism already tuned to perfectly complement it.

[0]: https://github.com/syg/ecmascript-binary-ast/#-2-early-error...

13
d--b 1 day ago 3 replies      
I am puzzled by how an binary AST makes the code significantly smaller than a minified+gziped version.

A JavaScript expression such as:

var mystuff = blah + 45

Gets minified as

var a=b+45

And then what is costly in there is the "var " and character overhead which you'd hope would be much reduced by compression.

The AST would replace the keywords by binary tokens, but then would still contain function names and so on.

I mean, I appreciate that shipping an AST will cut an awful lot of parsing, but I don't understand why it would make such a difference in size.

Can someone comment?

14
svat 1 day ago 0 replies      
However this technology pans out, thank for a really well-written post. It is a model of clarity.

(And yet many people seem to have misunderstood: perhaps an example or a caricature of the binary representation might have helped make it concrete, though then there is the danger that people will start commenting about the quality of the example.)

15
c-smile 1 day ago 2 replies      
To be honest, I (as the author of Sciter [1]) do not expect too much gain from that.

Sciter contains a source-code-to-bytecode compiler. Those bytecodes can be stored to files and loaded, bypassing the compilation phase. There is not too much gain, as JS-like grammar is pretty simple.

In principle the original ECMA-262 grammar was so simple that you could parse it without the need for an AST - a direct parser with one-symbol lookahead that produces bytecodes is quite adequate.

JavaScript use cases require fast compilation anyway - for source files as well as for eval() and similar cases like onclick="..." in markup.

[1] https://sciter.com

And JS parsers used to be damn fast indeed, until the introduction of arrow functions. Their syntax is what requires an AST.

16
kyle-rb 1 day ago 1 reply      
The linked article somehow avoids ever stating the meaning of the acronym, and I had to Google it myself, so I imagine some other people might not know: AST stands for "abstract syntax tree".

https://en.wikipedia.org/wiki/Abstract_syntax_tree

17
mnarayan01 1 day ago 1 reply      
For those curious about how this would deal with Function.prototype.toSource, via https://github.com/syg/ecmascript-binary-ast#functionprototy...:

> This method would return something like "[sourceless code]".

18
Existenceblinks 1 day ago 0 replies      
These are random thought I just wrote on twitter in the morning(UTC+7):

"I kinda think that there were no front-end languages actually. It's kinda all about web platform & browsers can't do things out of the box."

"Graphic interface shouldn't execute program on its own rather than rendering string on _platform_ which won't bother more."

"This is partly why people delegate js rendering to server. At the end of the day all script should be just WebAssembly bytecodes sent down."

"Browser should act as physical rendering object like pure monitor screen. User shouldn't have to inspect photon or write photon generators."

"SPA or PWA is just that about network request reduction, and how much string wanted to send at a time & today http/2 can help that a lot."

"Project like Drab https://github.com/grych/drab 's been doing quite well to move computation back to "server" (opposite to self-service client)"

"WebAssembly compromise (to complement js) to implement the platform. JS api and WebAssembly should be merged or united."

"VirtualDom as if it is a right way should be built-in just like DOM get constructed from html _string_ from server. All JS works must die."

"That's how WebComponent went almost a half way of fulfilling web platform. It is unfortunate js has gone far, tools are actively building on"

"I'd end this now before some thought of individualism-ruining-the-platform take over. That's not gonna be something i'd like to write (now)"

-----

Not a complete version though. Kind of general speaking but I've been thinking in detail a bit. Then hours later I found this thread.

19
TazeTSchnitzel 1 day ago 0 replies      
It's really exciting that this would mean smaller files that parse faster, but also more readable!
20
mnemotronic 1 day ago 1 reply      
Yea! A whole new attack surface. A hacked AST file could cause memory corruption and other faults in the browser-side binary expander.
21
iamleppert 1 day ago 3 replies      
I'd like to see some real-world performance numbers when compared with gzip. The article is a little overzealous in its claims that simply don't add up.

My suspicion is it's going to be marginal and not worth the added complexity for what essentially is a compression technique.

This project is a prime example of incorrect optimization. Developers should be focused on loading the correct amount of JavaScript that's needed by their application, not on trying to optimize their fat JavaScript bundles. It's so lazy engineering.

22
z3t4 1 day ago 0 replies      
I wish for something like evalUrl() to run code that has already been parsed "in the background" so a module loader can be implemented in userland. It would be great if scripts that are prefetched or http2 pushed could be parsed in parallel and not have to be reparsed when running eval.
23
kevinb7 1 day ago 1 reply      
Does anyone know where the actual spec for this binary AST can be found? In particular I'm curious about the format of each node type.
24
malts 1 day ago 1 reply      
Yoric - the Binary AST size comparisons in the blog - was the original javascript already minified?
25
limeblack 1 day ago 1 reply      
Could the AST be made an extension of the language similar to how it works in Mathematica?
26
bigato 1 day ago 0 replies      
Trying to catch up with webassembly, huh?
27
jlebrech 1 day ago 1 reply      
with an AST you can visualise code in ways other than text, and also reformat code like in go-fmt.
28
megamindbrian 1 day ago 1 reply      
Can you work on webpack instead?
29
tolmasky 1 day ago 3 replies      
One of my main concerns with this proposal is the increasing complexity of what was once a very accessible web platform. You have this ever increasing tooling knowledge you need to develop, and with something like this it would certainly increase as "fast JS" would require you to know what a compiler is. Sure, a good counterpoint is that it may be incremental knowledge you can pick up, but I still think a no-work, make-everything-faster solution would be better.

I believe there exists such a no-work alternative to the first-run problem, which I attempted to explain on Twitter, but it's not really the greatest platform to do so, so I'll attempt to do so again here. Basically, given a script tag:

 <script src = "abc.com/script.js" integrity="sha256-123"></script>
A browser, such as Chrome, would kick off two requests, one to abc.com/script.js, and another to cdn.chrome.com/sha256-123/abc.com/script.js. The second request is for a pre-compiled and cached version of the script (the binary ast). If it doesn't exist yet, the cdn itself will download it, compile it, and cache it. For everyone except the first person to ever load this script, the second request returns before the time it takes for the first to finish + parse. Basically, the FIRST person to ever see this script online, takes the hit for everyone, since it alerts the "compile server" of its existence, afterwards its cached forever and fast for every other visitor on the web (that uses chrome). (I have later expanded on this to have interesting security additions as well -- there's a way this can be done such that the browser does the first compile and saves an encrypted version on the chrome cdn, such that google never sees the initial script and only people with access to the initial script can decrypt it). To clarify, this solution addresses the exact same concerns as the binary AST issue. The pros to this approach in my opinion are:

1. No extra work on the side of the developer. All the benefits described in the above article are just free without any new tooling.

2. It might actually be FASTER than the above example, since cdn.chrome.com may be way faster than wherever the user is hosting their binary AST.

3. The cdn can initially use the same sort of binary AST as the "compile result", but this gives the browser flexibility to do a full compile to JIT code instead, allowing different browsers to test different levels of compiles to cache globally.

4. This would be an excellent way to generate lots of data before deciding to create another public facing technology people have to learn - real world results have proven to be hard to predict in JS performance.

5. Much less complex to do things like dynamically assembling scripts (like for dynamic loading of SPA pages) - since the user doesn't also have to put a binary ast compiler in their pipeline: you get binary-ification for free.

The main con is that it makes browser development even harder to break into, since if this is done right it would be a large competitive advantage and requires a browser vendor to now host a cdn essentially. I don't think this is that big a deal given how hard it already is to get a new browser out there, and the advantages from getting browsers to compete on compile targets makes up for it in my opinion.

30
agumonkey 1 day ago 0 replies      
hehe, reminds me of emacs byte-compilation..
31
Laaas 1 day ago 0 replies      
Why does this guy use bits instead of bytes everywhere?
32
FrancoisBosun 1 day ago 1 reply      
I feel like this may become some kind of reimplementation of Java's byte code. We already have a "write once, run anywhere" system. Good luck!
18
Ask HN: What mistakes in your experience does management keep making?
437 points by oggyfredcake  3 days ago   373 comments top 15
1
Boothroid 2 days ago 8 replies      
* Zero career direction and zero technical speciality for devs

* Underestimation of difficulty whether through cynicism (burn the devs) or cluelessness

* Inadequate training and the expectation that devs can just piggyback learning technology x from scratch whilst writing production software using it

* Trying to use one-off contracts as a way of building resellable products

* Insistence that all devs' time must be billable and trying to defy gravity by ignoring skills rot etc. through lack of investment in training

* Expectation that devs can be swapped between technologies without problems

* Swapping people in and out of projects as if this will not affect progress

* Deliberate hoarding of information as a means of disempowering devs

All of this inevitably leads to a bunch of pissed off devs. The ones that are happy to eat it become the golden boys and get promotions. Those that point out the bullshit leave once they can and are replaced with the desperate at the bottom who sooner or later arrive at the same position of wanting to leave once they realise what's going on. I think tech can be pretty miserable if you are not in the upper echelon of lucky types that can score a position at a Google, Facebook etc.

Oh and a couple more:

* Give no feedback unless things go wrong

* Treat your highly educated, intelligent and motivated devs like children by misusing agile in order to micromanage them

2
jerf 2 days ago 6 replies      
I'll add one that even after 200 comments I don't see: Failure to explain the reason why. Coming down to their developer with a list of tasks without explaining why those tasks are the most important and will lead to company success.

You might think startups are small enough that this couldn't happen but that was actually where my worst experience was. The founders are visibly in a meeting with a couple people, maybe "suits", maybe not. They come out of the meeting and the next day your priorities are rewritten. Cool beans, that's a thing that can happen and that's not my issue. My issue is, why? What are the goals we are trying to hit now? What's the plan? Why is that better than the old plan?

This is especially important IMHO for more senior engineers responsible for architecture and stuff, because those matters can greatly affect the architecture. Telling me why lets me start getting a grasp on what parts of the code are long term and what can be considered a short term hack, what the scaling levels I need to shoot for, and all sorts of other things that are very hard to determine if you just come to me with "And actually, our customers need a new widget to frozzle the frobazz now more than they need to dopple the dipple now."

Not necessarily the biggest issue, there's a lot of other suggestions here that are probably bigger in most places, but this is one that has frustrated me.

(I'll also say this is one you may be able to help fix yourself, simply by asking. If you are in that senior role I think you pretty much have a professional obligation to ask, and I would not be shy about working that into the conversation one way or another.)

3
muzani 2 days ago 4 replies      
* Killing things that have low profit margins, under some misguided Pareto Principle approach. Sometimes these things are loss leaders designed to pull customers for other products.

* Spending too much on marketing/sales before people want the product. They usually just end up burning their brand if the product is too low quality.

* Too much focus on building multiple small features rather than focusing on the value proposition.

* Trying to negotiate deadlines for product development. "We don't have two months to finish this. Let's do this in one." In software estimation, there's the estimate, the target, and the commitment. If the commitment and estimate are far off, it should be questioned why, not negotiated.

* Hiring two mediocre developers at half the salary of one good developer. They usually can't solve problems past a certain threshold.

* Importing tech talent, rather than promoting. Usually the people who have built the product have a better understanding of the tech stack than someone else they import.

* Startups that rely on low quality people to skimp on the budget. These people later form the DNA of the company and make it difficult to improve, if they're not the type who improve themselves.

4
stickfigure 3 days ago 25 replies      
I've never met a manager that wouldn't rather pay four average people $100/hr to solve a problem that one smart person could solve in half the time for $400/hr.

There seems to be some sort of quasi-religious belief in the fundamental averageness of humans; consequently the difference between developer salaries at any company varies by maybe 50%, whereas the productivity varies by at least a full order of magnitude.

Until "management" realizes this, the only way that a developer on the upper end of the productivity scale can capture their value is to found their own company. I sometimes wonder what would happen if some company simply offered to pay 3X the market rate and mercilessly filter the results.

5
lb1lf 2 days ago 6 replies      
Working for a company building heavy hardware, I see the following happen time and time again:

* Reorganizing seemingly for the sake of reorganizing. Result: Every time the new organization has settled somewhat and people know who to interact with to make things flow smoothly, everything is upended and back to square one.

* Trying to make our products buzzword compliant without understanding the consequences - we've on occasion been instructed to incorporate technologies which are hardly fit for purpose simply because 'everyone else is doing it' (Where 'everyone' is the companies featured in whatever magazine the CEO leafed through on his latest flight. Yes, I exaggerate a bit for effect.)

* Misguided cost savings; most of the hardware we use, we buy in small quantities - say, a few hundred items a year, maximum. Yet purchasing are constantly measured on whether they are able to source an 'equivalent' product at a lower price. Hence, we may find ourselves with a $20,000 unit being replaced by a $19,995 one - order quantity, 5/year - and spend $10,000 on engineering hours to update templates, redo interfaces &c.

* Assuming a man is a man is a man and that anyone is easily and quickly replaceable (except management, of course) - and not taking the time and productivity loss associated with training new colleagues into account.

Edit: An E-mail just landed in my inbox reminding me of another:

* Trying to quantify anything and everything, one focuses on the metrics which are easy to measure, rather than the ones which matter. As a result, the organization adapts and focuses on the metrics being measured, not the ones which matter - with foreseeable consequences for productivity.

6
JamesLeonis 3 days ago 3 replies      
Want to jump ahead a few years from Mythical Man-Month? Let me recommend Peopleware by Tom DeMarco and Tim Lister.[2] It's painful that we haven't crawled far out of the 80s practices.

The first chapter says: "The major problems of our work are not so much technological as sociological in nature." Sorry Google Memo Dude. DeMarco and Lister called it in the 80s.

Speaking of DeMarco, he also wrote a book about controlling software projects before Peopleware. Then in 2009 he denounced it. [1]

 To understand control's real role, you need to distinguish between two drastically different kinds of projects:

 * Project A will eventually cost about a million dollars and produce value of around $1.1 million.

 * Project B will eventually cost about a million dollars and produce value of more than $50 million.

 What's immediately apparent is that control is really important for Project A but almost not at all important for Project B. This leads us to the odd conclusion that strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you're working on a project that's striving to deliver something of relatively minor value.
I always think about that when I'm doing a Sprint Review.

[1]: https://www.computer.org/cms/Computer.org/ComputingNow/homep...

[2]: https://en.wikipedia.org/wiki/Peopleware:_Productive_Project...

7
ChuckMcM 3 days ago 4 replies      
There are some very common ones;

* Building one more generation of product than the market supports (so you build a new version when the market has moved on to something new).

* Rewarding productivity over quality.

* Managing to a second-order effect. For example, when Nestlé bought Dreyer's they managed to 'most profit per gallon', which rewarded people who substituted inferior (and cheaper) components; that led to lower overall sales, which led to lower overall revenue. Had they managed to overall revenue they might have caught the decline sooner.

* Creating environments where nobody trusts anyone else and so no one is honest. Leads to people not understanding the reality of a situation until the situation forces the disconnect into the mainstream.

* Rewarding popular employees differently than rank and file. Or generally unevenly enforcing or applying standards.

* Tolerating misbehavior out of fear of losing an employee. If I could fire anyone in management who said, "Yeah but if we call them on it they will quit! See what a bind that puts us in?" I believe the world would be a better place.

There are lots of things, that is why there are so many management books :-)

8
sulam 3 days ago 3 replies      
I have held management and non-management careers in roughly equal proportion over my career. My list would look like this:

1) believing you can dramatically change the performance of an employee -- it's very rare to save someone and less experienced managers always believe they can.

1.5) corollary to the above: not realizing the team is aware and waiting for you to fix the problem and won't thank you for taking longer to do what's necessary.

2) believing that people don't know what you're thinking -- people see you coming a mile off.

3) thinking you can wait to fix a compensation problem until the next comp review -- everyone waits too long on these.

4) believing HR when they tell you that you can't do something that's right for your team -- what they're really saying is that you have to go up the ladder until you find someone who can force them to make an exception.

5) not properly prioritizing the personal/social stuff -- at least this is my personal failing, and why ultimately management has not stuck for me.

6) believing your technical opinion matters -- I've seen way too many VPs making technical decisions that they are too far from the work to make. Trust your team!

It'd be fun to see a list of these from the non-management point of view. I'd start off with the inverse of #6 above:

1) believing your technical opinion matters -- the business is what ultimately matters.

9
tboyd47 2 days ago 5 replies      
Trying to write code alongside their devs.

Here's what happens when a manager tries to fill tickets himself: his sense of control of the project is derived not from relationships of trust and cooperation with his reports, but from direct involvement in the code. So naturally, any challenging or critical piece of code ends up getting written by him (because otherwise, how could he be confident about it?)

The manager is essentially holding two jobs at once so they end up working late or being overly stressed at work.

The devs will feel intimidated to make architecture decisions, because they know if they do something their manager doesn't like, it will get refactored.

They will also feel as if they are only given the "grunt work" as all the challenging work is taken on by their manager.

The code itself is in a constant state of instability because there is a tension between the manager needing the other employees' help to get the code written on time and needing the complete control and mastery over the code that can only come from writing it yourself. So people's work gets overwritten continually.

This is very bad and it's very common - managers should learn to delegate as that is an essential part of their job. If they can't delegate they should remain as an individual contributor and not move into management.

10
ideonexus 2 days ago 3 replies      
The biggest recurring issue I've had with my managers over the last twenty years is their need to add unnecessary complexity to projects. I think a good manager stays out of the way and just monitors employees for any obstructions that are preventing them from meeting their goals. Yet, my experience is that when a manager sits in on a project meeting, they can't help but start giving input on the project itself, adding complexity to defined business rules or adding obscure use cases to the system. Too many managers can't help but dominate meetings, because a dominant personality is how they became managers in the first place.

The worst is when you get two or more managers attending the same meeting. Then nothing will get done as they eat up all of the meeting time arguing about business rules, magnifying the complexity of the system until you end up with some Rube Goldberg chain of logic that they will completely forget minutes after they've left the meeting. A good manager knows to trust their employees and only intervenes to make sure those employees have the resources they need to do their jobs. The most effective managers are humble and respect the expertise of the experts they hire.

11
alexandercrohde 3 days ago 2 replies      
- Trying to "create a buzz" around the office, asking for a "sense of urgency," and other things that result in an illusion of productivity.

- Focusing on fixing problems, rather than preventing problems

- Acting as yes-men to bad upper-management strategy, thereby creating a layer of indirection between the people who think it's a good plan vs the engineers who can explain why it's not quite that easy

- Trying to use software tools (e.g. Jira's burndown charts) to quantitatively/"objectively" measure engineers

12
mychael 2 days ago 0 replies      
A few patterns I've seen:

* Preaching about the virtues of a flat organizational structure, but making unilateral decisions.

* Hiring people for a particularly challenging job, but having them work on menial, unchallenging tasks.

* Creating multiple layers of management for a tiny team.

* Facilitating post mortems that would be better facilitated by a neutral third party.

* Using vague management speak as a deliberate strategy to never be held responsible for anything.

* Rewarding politics with promotions.

* Marginalizing experienced employees.

* Talking too much about culture.

* Trying to be the company thought leader instead of helping people do their best work.

* Assuming that everyone underneath you views you as a career mentor.

* Negging employees.

* New hire managers: Firing incumbent employees after you've only been on the job for a few weeks.

* New hire managers: Not doing 1:1s with everyone who reports to you.

* New hire managers: Creating sweeping changes like re-orgs after a few weeks on the job.

* New hire managers: Doing things a certain way because it worked well at a previous company.

* New hire managers: Changing office work hours to suit your personal life.

13
greenyoda 3 days ago 2 replies      
Promoting technical people with no management experience into management jobs, without providing them with any training or guidance. (Happened to me.) Writing code and managing people require very different sets of skills, and just because you're good at the former doesn't necessarily mean you'll be any good at the latter (or that you'll enjoy doing it).

(Similar problems can happen when a bunch of people with no management skills decide to found a company and start hiring people.)

14
redleggedfrog 3 days ago 1 reply      
The worst mistake I've seen management make over 20 years of software development is not listening to the technical people.

Estimates get shortened. Technical decisions are overruled for business or political reason. Warnings about undesirable outcomes are ignored. Sheer impossibility deemed surmountable.

I feel this is the worst mistake by management because the technical people are the ones who suffer for it. Overtime, inferior software, frustration, technical debt and lack of quality are all things management doesn't really care about, because they can always just push people harder to get what they want.

15
cbanek 3 days ago 2 replies      
Overly optimistic schedules. Even with a known gelled team, being constantly overscheduled is a nightmare. You cut corners, and are always stressed and tired. Other teams that believe the optimistic schedules may become angry or blocked on you. Over time this just leads to burnout, but since nobody seems to stay anywhere for very long, nobody seems to care.
19
YouTube admits 'wrong call' over deletion of Syrian war crime videos middleeasteye.net
237 points by jacobr  1 day ago   136 comments top 15
1
alexandercrohde 1 day ago 7 replies      
I think youtube needs to consider backing off regulating political content.

The fact is politics and morality are inherently intermingled. One can use words like extremist, but sometimes the extremists are the "correct" ones (like our founding fathers who orchestrated a revolution). How could any system consistently categorize "appropriate" videos without making moral judgements?

2
itaris 1 day ago 7 replies      
I'm as much a proponent of automation as anyone else. But I think right now Google is trying to do something way too hard. By looking for "extremist" material, they are basically trying to determine the intention of a video. How can you expect an AI to do that?
3
molszanski 21 hours ago 1 reply      
Let's look at the bigger picture. First, in March some newspapers find an extremist video. It has ~14 views and YT advertising all over it. They make a big deal out of it. As a result YouTube loses ad clients and tons of money.

Then, as a response, they make an algorithm. They don't want people to call them a "terrorist platform" ever again. Hence they take down the videos.

Now, this algorithm is hurting the bystanders. IMO the real problem is a public and business reaction to the initial event.

And this piece of news is an inevitable consequence.

4
jimmy2020 1 day ago 0 replies      
I really don't know how to describe my feelings as a Syrian when I learn that the most important evidence of the regime's crimes was deleted because of a "wrong call". And it's really confusing how an algorithm gets confused between what is obviously ISIS propaganda and a family buried under the rubble, and this statement makes things even worse. Mistakenly? Because there are so many videos? Just imagine that happening to any celebrity's channel. Would YouTube issue the same statement? I don't think so.
5
ezoe 1 day ago 0 replies      
What I don't like about these web-giant services is that, to get human support, you have to generate social pressure like this.

If they fuck something up through automation, contacting human support is hopeless unless you have a very influential SNS status or something.

6
tdurden 1 day ago 2 replies      
Google/YouTube needs to admit defeat in this area and stop trying to censor, they are doing more harm than good.
7
balozi 1 day ago 2 replies      
Well, the AI did such a bang-up job sorting out the mess in the comments section that it got promoted to sorting out the videos themselves.
8
DINKDINK 1 day ago 0 replies      
What about all the speech that's censored but doesn't have enough interest or political clout behind it to make people aware of the injustice of its censoring?
9
osteele 1 day ago 0 replies      
HN discussion of deletion event: https://news.ycombinator.com/item?id=14998429
10
williamle8300 1 day ago 0 replies      
Google (parent company of YouTube) already sees itself as the protector of the public's eyes and ears. They might be contrite now, but they behave like a censoring organization.
11
RandVal30142 1 day ago 0 replies      
Something people need to keep in mind when parsing this story is that many of the affected channels were not about militancy; they were local media outlets. Local outlets that only gained historical note due to what they documented as it was unfolding.

In Syria outlets like Sham News Network have posted thousands upon thousands of clips. Everything from stories on civilian infrastructure under war, spots on mental health, live broadcasts of demonstrations.

Everything.

Including documenting attacks as they happen and after they have happened. Some of the affected accounts were ones that documented the regime's early chemical weapons attacks. These videos are literally cited in investigations.

All that is needed to get thousands upon thousands of hours of documentation going back half a decade deleted is three strikes.

Liveleak is not a good host for such outlets because it is not what these media outlets are about. Liveleak themselves delete content as well so even if the outlets fit the community it would not be a 'fix.'

12
miklax 21 hours ago 0 replies      
Bellingcat account should be removed, I agree on that with YT.
13
pgnas 1 day ago 1 reply      
YouTube (Google) has become EXACTLY what they said they were not going to be.

They are evil.

14
norea-armozel 1 day ago 1 reply      
I think YouTube really needs to hire more humans to review flagging of videos rather than leave it to a loose set of algorithms and swarming behavior of viewers. They assume wrongly that anyone who flags a video is honest. They should always assume the opposite and err on the side of caution. And this should also apply to any Content ID flagging. It should be the obligation of accusers to present evidence before taking content down.
15
762236 1 day ago 5 replies      
Automation is the only real solution. These types of conversations seem to always overlook how normal people don't want to watch such videos. Do you want to spend your day watching this stuff to grade them?
20
Wekan: An open-source Trello-like kanban wekan.github.io
297 points by mcone  2 days ago   92 comments top 11
1
tadfisher 2 days ago 10 replies      
If you want to do Kanban right, double down on making it possible to design actual Kanban workflows. Pretty ticket UI with checklists and GIFs must be secondary to this goal.

Things that most actual Kanban flows have that no one has built into a decent product[0]:

 - Nested columns in lanes
 - Rows for class-of-service
 - WIP limits (per lane, per column, and per class-of-service)
 - Sub-boards for meta issues
The actual content of each work item is the least important part of Kanban; it could be a hyperlink for all I care. Kanban is about managing the flow, not managing the work.

[0] Please prove me wrong if there is such a product out there!

2
bauerd 2 days ago 4 replies      
I thought for a second my touchpad just broke. Might want to make the landing page look less like there's content below the fold.
3
nsebban 2 days ago 5 replies      
While I like the idea of having open source alternatives to the popular applications, this one is a pure and simple copy of Trello. This is a bit too much IMO.
4
tuukkah 2 days ago 0 replies      
Gitlab needs a better issue UI and perhaps this could be integrated.
5
Fej 1 day ago 4 replies      
Has anyone here had success with a personal kanban board?

Considering it for myself, even if it isn't the intended use case.

6
anderspitman 2 days ago 0 replies      
I think lack of an OSS alternative with a solid mobile app is the only thing keeping me on Trello at this point.
7
thinbeige 2 days ago 1 reply      
Trello got so mature, has a great API, is well integrated with Zapier and hundreds of other services AND is free (I still don't know why one should get into the paid plan; even with bigger teams, the free version is totally fine) that it must be super hard for any clone or competitor to win users.
8
number6 2 days ago 3 replies      
Does it have authentication yet? Last time I checked there were no users, administration, or any permissions.
9
alinspired 2 days ago 2 replies      
What's the storage backend for this app?

Also shout out to https://greggigon.github.io/my-personal-kanban/ that is a simple and offline board

10
onthetrain 2 days ago 1 reply      
Is it API-compatible with Trello? That would rock, being able to use Trello extensions.
11
yittg 1 day ago 2 replies      
All I want to know is why it has a Chinese-like name: kanban ^_^
21
A 2:15 Alarm, 2 Trains and a Bus Get Her to Work by 7 A.M nytimes.com
284 points by el_benhameen  2 days ago   300 comments top 15
1
ucaetano 2 days ago 7 replies      
San Francisco has more than twice the area of Manhattan, with half the population.

The San Francisco Bay Area CSA has about the same population as Switzerland, with 2/3 of the area of Switzerland.

Both have similar GDPs.

If someone wanted to live in Germany, and commute to work in Zurich, with Swiss salaries and German cost-of-living, their commute would be about 1h15min.

The housing and infrastructure problems in the SFBA are purely political, and self-inflicted.

2
nxsynonym 2 days ago 15 replies      
While the rising cost of housing is an easy target, why not put the pressure on the companies that are driving the influx of workers and out of control cost of living?

If tech companies are drawing people into cities and forcing out those who keep the city itself operating, why not have them subsidize and improve public transportation? Lower income housing? Encourage more remote work or move their headquarters out of the city centers? It seems crazy to me that people get driven out of their homes by real estate developers who re-develop due to tech-wages.

This example is a bit on the fringe, but it does illustrate the daily struggle of many normal people. 2+ hour commute is insane. And before someone comes in with the "why doesn't she get a new job closer to home?", you know it's not that easy - not to mention unfair to suggest that someone should change their entire life because their profession isn't flavor of the month.

3
jakelarkin 2 days ago 1 reply      
Reporters keep trying to find profiles of the housing crisis but this seems disingenuous. A lot of what this woman is doing is a choice; waking up hours before the train, huge house in Stockton vs condo/apartment closer. She makes $80k/year meaning post-tax $4600/month. She could easily afford a nice 1 or 2 bedroom in Pittsburg or Pleasanton for ~$2k a month, and her door-to-door commute would be well under 80 minutes.
4
__sha3d2 2 days ago 1 reply      
I was going to come in here and talk about how this strikes me more as a personal choice than a symptom of a systemic problem (I lived well in SF on $60k, and I mean, for fuck's sake, Stockton? That is aggressively far. There are many great closer options.), but who cares.

This woman seems to have a really peaceful existence. It would be nice to have such relaxing routines in my own life, especially in the face of stressful realities like a long commute on public transit. It makes me want to develop the fortitude that this woman exercises every single day.

5
edward 2 days ago 2 replies      
I think these articles about extreme commutes are interesting. I recommend you read what Mr. Money Mustache has to say on the subject.

http://www.mrmoneymustache.com/2011/10/06/the-true-cost-of-c...

6
jdavis703 2 days ago 2 replies      
One way to make this better is to support the plan to inline the Capitol Corridor and ACE trains with Caltrain via the unused and abandoned Dumbarton rail bridge [0]. Make sure to tell your elected officials you support this.

[0]: http://www.greencaltrain.com/2017/08/dumbarton-corridor-stud...

7
bartart 2 days ago 1 reply      
It appears that when local cities have control over housing, they make decisions that are good for them, but bad for the state that needs higher paying jobs that generate tax revenue. Plus low density housing is bad for the environment when compared to high density: http://news.colgate.edu/scene/2014/11/urban-legends.html

Other places would move heaven and earth to have a place like Silicon Valley and it seems like California is shooting itself in the foot with this self inflicted housing shortage.

8
santaclaus 2 days ago 1 reply      
If a municipality decides to open 100 seats of office space, they should be required to zone and approve 100 beds for said workers to sleep in. Otherwise you have the situation where towns like Brisbane can build office complexes for the tax revenue and entirely pass the buck to their neighbors for the cost of housing the new workers.
9
peterjlee 2 days ago 1 reply      
>Ms. James pays $1,000 a month in rent for her three-bedroom house, compared with $1,600 for the one-bedroom apartment she had in Alameda.

She was forced out of her original apartment but unless she had a new situation that required her to have more bedrooms with a lower budget, some part of this extremity was her choice. I think NYT should've chosen a better example if the point they were trying to get across is "tech boom forcing workers out".

10
turtlebits 2 days ago 1 reply      
A little misleading as she needs to catch the train at 4am. Still, an almost 3 hour commute is brutal.
11
ChuckMcM 2 days ago 2 replies      
This is a pretty amazing example. It raises more questions than it answers however. The big one is "Why continue working in SF with this horrible commute?"

Market dynamics suggest that in a 'free' market, on an individual basis, a person seeks to maximize their value received. So in this case, if this was an open market, the (commute + $81K/year) > any other option that she might choose.

So what are we missing that there isn't at least an equal paying job available in the Central Valley that would cut her commute by 80 - 90% ? I can imagine lots of things that might contribute like pension eligibility, or specialization. But "Public Health Advisor for DHHS" seems to be a position that is available in many cities in the state. I would have liked to read what about this job in this city was so important.

And all of that leads to the meta question: salary growth has been flat for a long time, but for a long time people felt they had to keep their job at all costs. At what point does the advantage change? 3% unemployment? 2%? What needs to happen so that people are confident enough to say "pay me more or I'll work somewhere else"?

As a result of the stuff I wonder about, I feel like there might be a tremendous amount of tension in the economy that isn't as visible as one might hope. And I wonder what happens when it snaps. Do we get the 10-15% stagflation of the 70s?

12
deckar01 2 days ago 3 replies      
I paid about 30% of my salary in 2014 for a SRO [0] in San Francisco with a 10 minute commute. I decided that the quality of living was not worth the potential future earnings for staying in the bay area. In Tulsa I pay about 14% of my salary to rent a large apartment with a 4 minute commute.

[0]: https://en.wikipedia.org/wiki/Single_room_occupancy

13
ghomrassen 2 days ago 2 replies      
What's to stop San Francisco from creating a lot more high-density housing? Doesn't that solve many issues with housing supply? Looking at wikipedia, the density is crazy low even compared to other major cities in the US.
14
owenversteeg 2 days ago 1 reply      
Why isn't there a good, inexpensive bus? You can fit almost one hundred passengers on a single bus. Buses are pretty fuel-efficient per passenger, easy to reroute if you have more/less demand, and you can go directly to where you need to go. They scale very well - if there are only 20 people or so you can send a small bus, if there are thousands you can send multiple large buses.

Sure, traffic can be an issue, but I'd imagine train delays are roughly as big of a problem.

In this case, it looks like her 3hr 20min commute could become 1hr 30min with a bus that goes from Stockton.

Why has nobody done this?

15
PopsiclePete 2 days ago 5 replies      
So why do we keep commuting for jobs that don't really require our physical presence?

It was a long hard battle for me to be able to work from home, and yes, I sometimes do miss out on face-to-face interaction, but going to the office 2 instead of 5 days a week is still a huge win - I'm not in a car out there, making traffic worse for you. You're welcome.

Can we please solve the supposed "interaction" problem with some nice digital pens and white-boards and web cams and just .... work from anywhere?

22
GitHub CEO Chris Wanstrath To Step Down After Finding Replacement forbes.com
257 points by ahmedfromtunis  2 days ago   48 comments top 13
1
andygcook 2 days ago 5 replies      
Random story about Chris...

I saw him speak at a startup event in 2010 at MIT called Startup Bootcamp. It was probably my first startup-related conference and he was the first talk in the kick-off slot at 9am. He gave a great talk recapping the origin of GitHub and how it grew out of another project called FamSpam, a social network for families.

After the talk I had to run to the restroom and happened to run into Chris out in the entryway. I introduced myself and we started chatting. As we were talking, people started walking into the event late. They saw us standing in the entrance, and started asking questions about where to go.

Instead of deferring responsibility to someone working at the event, Chris sat down at the empty welcome table and started checking people in by giving them schedules and helping them create name tags. We ended up checking in a few dozen people together while we talked more. No one knew who Chris was when they walked in, and just assumed he was a member of the event staff. I think had they known he was the co-founder of GitHub they probably would have paid more attention to him.

I ended up sending him a t-shirt and he took the time to shoot me back an email saying thanks. The subject line was "Dude" and the text was "Got the shirt. It's so awesome. You rock. Tell your brother yo, too!"

Anyways, I just thought it was kind of cool he took it upon himself to help out with checking people in at the event even though he had volunteered to travel all the way to Boston to speak for free to help out young, aspiring entrepreneurs by sharing his learnings. It always kind of stuck with me that you need to stay down to earth and pay it forward no matter how successful you get.

2
matt4077 1 day ago 0 replies      
Github is among the best things that ever happened to OSS. Compared to anything that came before, it is a pleasure to browse, it is intuitive, and it has managed to corral millions of people with vastly different backgrounds into a golden age of OSS productivity.

In the 10+ years before GitHub, I never even tried to contribute code -- each project had its own workflow, and sending an email somehow felt intimidating. Today, spending an hour here or there to improve it slightly has almost become a guilty pleasure.

So, I guess what I'm saying is: Thank you!

3
forgingahead 2 days ago 2 replies      
Forbes is ad-infested hell, here is an Outline link:

https://outline.com/mntwGu

4
jdorfman 2 days ago 0 replies      
When I was at GitHub Satellite last year in Amsterdam, I saw Chris walk in to the venue and look around at the amazing production and smile. You could tell how proud he was of his team and the brand he helped create. I am glad to see he is staying with the company, I'm sure the new CEO will need his advice from time to time to keep GitHub great for the next 10 years.
5
DanHulton 2 days ago 0 replies      
Why the title change? As far as I can tell, it's factually incorrect, as well.

Wanstrath is planning on stepping down and hasn't stepped down yet.

6
geerlingguy 2 days ago 8 replies      
Any other way of viewing this story? On my iPad with Focus, I just get a blurred out screen when I visit Forbes.com now. I remember it used to show a 'please turn off your ad blocker' dismissible splash screen, but that seems to not be the case any more.
7
tdumitrescu 1 day ago 0 replies      
I've never met the guy but have a ton of respect for his work - his open source projects like Resque and pjax were awesome for their time. I imagine GitHub has benefited a lot from having real coders at the helm for so long.
8
nodesocket 1 day ago 0 replies      
> Wanstrath plans to focus on product strategy and the GitHub community after stepping down from the CEO role, working directly on products and meeting with customers.

Just a theory, but perhaps they're bringing in a new professional CEO for an IPO?

9
jbrooksuk 1 day ago 0 replies      
> GitHub may seek to become more of a marketplace that can help developers show off their work and take on additional projects, with GitHub taking a portion as a fee, says Sequoia investor Jim Goetz.

They already have a Marketplace offering.

10
grandalf 1 day ago 0 replies      
Chris is one of the few well-known developers who conveys a deep love of software engineering. Looking forward to reading some of the code he writes in the coming months.
11
ShirsenduK 1 day ago 0 replies      
The title is misleading. He plans to, but hasn't!
12
amgin3 1 day ago 1 reply      
13
PHP_THROW_AWAY1 2 days ago 1 reply      
Wrong title
23
Elixir in Depth - Reading and personal notes digitalfreepen.com
333 points by rudi-c  2 days ago   142 comments top 15
1
randomstudent 2 days ago 4 replies      
The author doesn't talk a lot about preemptive scheduling, which is probably the best thing about the Erlang virtual machine. This video explains what preemptive scheduling is and why it is extremely useful: https://www.youtube.com/watch?v=5SbWapbXhKo

First, the speaker creates an application endpoint that runs into an infinite loop on invalid input. Then, he shows how that doesn't block the rest of the requests. Using native BEAM (EDIT: BEAM = EVM = Erlang Virtual Machine) tools, he looks for the misbehaving process (the one running the infinite loop), prints some debug information and kills it. It's pretty impressive.

Another great (and lighter) resource, is "Erlang the Movie" (https://www.youtube.com/watch?v=xrIjfIjssLE). It shows the power of concurrency through independent processes, the default debug tools, and the power of hot code reloading. Don't miss the twist at the end.
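
To see the effect outside the talk, here is a minimal Elixir sketch of my own (module and function names are made up): it spawns a runaway process stuck in a pure CPU loop, shows that other work still gets scheduled, then inspects and kills the offender much like the video does.

    # Minimal sketch of BEAM preemption; assumes any recent Elixir/OTP install.
    defmodule PreemptionDemo do
      def run do
        # A process stuck in a pure CPU loop: no receive, no sleep, never yields voluntarily.
        runaway = spawn(fn -> busy_loop() end)

        # These ticks still print on time, because the scheduler preempts the
        # runaway process once it burns through its reduction budget.
        Enum.each(1..5, fn i ->
          IO.puts("still responsive, tick #{i}")
          Process.sleep(100)
        end)

        # Inspect the misbehaving process, then kill it.
        IO.inspect(Process.info(runaway, [:status, :reductions]))
        Process.exit(runaway, :kill)
      end

      defp busy_loop, do: busy_loop()
    end

    PreemptionDemo.run()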

2
jesses 2 days ago 5 replies      
The author says it's "hard to use Elixir with services like Heroku because your instances won't find each other by default (the way they're supposed to)".

I just wanted to mention https://gigalixir.com which is built to solve exactly this problem.

Disclaimer: I'm the founder.

3
acconrad 2 days ago 5 replies      
All of my pet projects are run on Elixir/Phoenix. If there is a language/framework to learn, this would be it. As approachable as Rails with the benefits of handling real time applications (chats are trivial with Channels) and faster runtimes than Rails (by orders of magnitude).

Happy to help anyone out if they're interested in learning!
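
For a sense of how little code a chat takes, here is a rough Phoenix channel sketch (my own illustration, not from a real project; it assumes a generated Phoenix app whose user socket declares channel "room:*", MyAppWeb.RoomChannel):

    # Rough sketch only; MyAppWeb is a placeholder app namespace.
    defmodule MyAppWeb.RoomChannel do
      use Phoenix.Channel

      # Let anyone join the lobby topic.
      def join("room:lobby", _params, socket), do: {:ok, socket}

      # Fan each incoming message out to every subscriber of the topic.
      def handle_in("new_msg", %{"body" => body}, socket) do
        broadcast!(socket, "new_msg", %{body: body})
        {:noreply, socket}
      end
    end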

4
sjtgraham 2 days ago 3 replies      
I don't think the Gartner hype cycle applies to Elixir, and I think that is largely because it is built on top of a very mature, production-tested platform (Erlang OTP). I have been using it in production for almost two years without issue on https://teller.io/, so if the GHC applies, it's very elongated!
5
brightball 2 days ago 1 reply      
The only part of that article I'd clarify is around deployment and devops best practices.

You can deploy Elixir exactly the same as any other language. In some cases, it just means deciding that you don't need some of the extra features that are available, like hot reloading... which not every project needs.

You can still use immutable images and take advantage of the built in distributed databases by using mounted block storage if need be.

You can use everything out there. Early on figuring out some of the port mappings and how to handle things was more difficult but as far as I've seen, those problems have mature solutions all around now.

6
palerdot 2 days ago 0 replies      
If someone is on the fence about making the jump to the Elixir world, I recommend Elixir Koans - https://github.com/elixirkoans/elixir-koans

I have not started looking into Phoenix as I'm still exploring Elixir, but I'm happy to have started learning Elixir with the koans along with the official Elixir guide.

7
esistgut 2 days ago 2 replies      
I don't like the debugging workflow described in the linked article "Debugging techniques in Elixir"; it reminds me of DDD: a separate tool not integrated with my main development environment and requiring extra manual steps. I tested both the JetBrains plugin and the VS Code extension; both failed (unsupported versions, bugs, etc.). To Elixir users: what do you think about the state of debugging tools? What is your workflow?
8
sergiotapia 2 days ago 0 replies      
Great write up! Guys, if you're on the fence about learning Elixir - dive on in!

You won't be disappointed, and you'll be surprised how many times you'll want to reach for an auxiliary service and find out "Oh, I can just use ETS/OTP/GenServer/spawn".
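
As one concrete example of that moment, here is a generic GenServer sketch (mine, not from the article) of the kind of in-memory state you might otherwise reach for an external store to hold:

    # Minimal GenServer sketch: a counter kept in process state.
    defmodule Counter do
      use GenServer

      # Client API
      def start_link(initial \\ 0),
        do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

      def increment, do: GenServer.cast(__MODULE__, :increment)
      def value, do: GenServer.call(__MODULE__, :value)

      # Server callbacks
      def init(initial), do: {:ok, initial}
      def handle_cast(:increment, count), do: {:noreply, count + 1}
      def handle_call(:value, _from, count), do: {:reply, count, count}
    end

    {:ok, _pid} = Counter.start_link()
    Counter.increment()
    IO.puts(Counter.value())  # prints 1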

9
tschellenbach 2 days ago 1 reply      
Yes, Elixir handles concurrency better than Ruby. In terms of raw performance it's nowhere near Go, Java, or C++ though. Rails/Django are fast enough for almost all apps; if you need the additional performance of a faster language you'd probably end up with one of those three. I wonder how much need there is for a language that takes the middle ground in terms of performance. Looks very sexy though; I really want to build something with it :)
10
innocentoldguy 2 days ago 1 reply      
I don't know why, because I don't think they are all that similar, but I'm often asked to defend my choice of using Elixir professionally vs. Go. From the article, this is one of the big reasons I chose Elixir over Go:

"Gos goroutines are neither memory-isolated nor are they guaranteed to yield after a certain amount of time. Certain types of library operations in Go (e.g. syscalls) will automatically yield the thread, but there are cases where a long-running computation could prevent yielding."

goroutines also take up about 10 times the memory.
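
The footprint claim is easy to eyeball yourself; a rough sketch of my own (numbers vary by machine and OTP release, and it assumes the default process limit of ~262k):

    # Rough sketch: spawn 100k idle BEAM processes and report their total memory.
    pids =
      for _ <- 1..100_000 do
        spawn(fn ->
          receive do
            :stop -> :ok
          end
        end)
      end

    IO.puts("processes spawned: #{length(pids)}")
    IO.puts("process memory: ~#{div(:erlang.memory(:processes), 1024 * 1024)} MB")

    Enum.each(pids, &send(&1, :stop))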

11
bitwalker 2 days ago 0 replies      
I think the discussion around deployment may have been unnecessarily tainted by their experience using edeliver - it's an automation layer for building and deploying releases, but as mentioned it is configured via shell scripts and because it does a lot, a lot can go wrong.

The basic unit of Elixir (and Erlang for that matter) deployments is the release. A release is just a tarball containing the bytecode of the application, configuration files, private data files, and some shell scripts for booting the application. Deployment is literally extracting the tarball to where you want the application deployed, and running `bin/myapp start` from the root of that folder which starts a daemon running the application. There is a `foreground` task as well which works well for running in containers.
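
For readers who haven't seen one, the release is typically described by a small Elixir config; a rough, from-memory sketch of a Distillery-era rel/config.exs follows (treat the exact DSL as illustrative -- it varies between Distillery versions -- and :myapp is a placeholder):

    # From-memory sketch of a Distillery-style rel/config.exs; details are illustrative.
    use Mix.Releases.Config,
      default_release: :default,
      default_environment: :prod

    environment :prod do
      # Bundling ERTS means the target needs no Erlang install, with the
      # OS/architecture caveats described further down in this comment.
      set include_erts: true
      set include_src: false
    end

    release :myapp do
      set version: current_version(:myapp)
    end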

My last Elixir gig prior to my current one used Docker + Kubernetes and almost all of our applications were Elixir, Erlang, or Go. It was extremely painless to use with releases, and our containers were tiny because the release package contained everything it needed to run, so the OS basically just needed a shell, and the shared libraries needed by the runtime (e.g. crypto).

My current job, we're deploying a release via RPM, and again, releases play really nicely with packaging in this way, particularly since the boot script which comes with the release takes care of the major tasks (start, stop, restart, upgrade/downgrade).

There are pain points with releases, but once you are aware of them (and they are pretty clearly documented), it's not really something which affects you. For example, if you bundle the Erlang runtime system (ERTS) in a release, you must deploy to the same OS/architecture as the machine you built the release on, and that machine needs to have all of the shared libraries installed which ERTS will need. If you don't bundle ERTS, but use one installed on the target machine, it must be the same version used to compile your application, because the compiled bytecode is shipped in the release. Those two issues can definitely catch you if you just wing a deployment, but they are documented clearly to help prevent that.

In short, if there was pain experienced, I think it may have been due to the particular tool they used - I don't think deployment in Elixir is difficult, outdated, or painful, but you do have to understand the tools you are using and how to take advantage of them, and I'm not sure that's different from any other language really.

Disclaimer: I'm the creator/maintainer of Distillery, the underlying release management tooling for Elixir, so I am obviously biased, but I also suspect I have more experience deploying Elixir applications than a lot of people, so hopefully it's a wash and I can be objective enough to chime in here.

12
Exuma 2 days ago 1 reply      
This is a really good write up, thank you.
13
RobertoG 2 days ago 2 replies      
That was a nice article. Thanks.

I'm curious about the first table in the "Interop with other systems" part.

It seems to say that an Erlang deployment doesn't need Nginx or a separate HTTP server -- does anybody know how that works?

EDIT: I read the cited source (https://rossta.net/blog/why-i-am-betting-on-elixir.html) and it seems that is the case.

It still looks too good to be true. It would be nice if somebody with Erlang deployment experience could comment.
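
For context on why the table can omit Nginx: the HTTP server (Cowboy) runs inside the BEAM itself. A minimal sketch of that idea using Plug (assumes plug and cowboy as dependencies; Plug.Adapters.Cowboy was the entry point of that era, newer projects use Plug.Cowboy instead):

    # Minimal sketch: HTTP served directly from the BEAM, no reverse proxy in front.
    defmodule HelloPlug do
      import Plug.Conn

      def init(opts), do: opts

      def call(conn, _opts) do
        conn
        |> put_resp_content_type("text/plain")
        |> send_resp(200, "served straight from the BEAM")
      end
    end

    {:ok, _} = Plug.Adapters.Cowboy.http(HelloPlug, [], port: 4000)
    Process.sleep(:infinity)  # keep the script alive so the listener keeps running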

14
gfodor 2 days ago 0 replies      
One thing I didn't see covered that I'm currently trying to understand with Elixir is the relationship between process state and disk-backed state. (For example fetched via Ecto.) Does the role of a traditional RDBMS change in an elixir system? What are the durability guarantees of process state? Etc. Any real world experience would be super helpful to hear about.
15
brudgers 2 days ago 0 replies      
I can see why a person might choose Elixir over Ruby or vice versa. The tradeoffs between Elixir and Erlang are a lot less clear to me.
24
Mastodon is big in Japan, and the reason why is uncomfortable medium.com
257 points by keehun  15 hours ago   202 comments top 22
1
coldtea 14 hours ago 13 replies      
"Uncomfortable" as in "offends my American puritan-inspired sensibilities".

"Pardon him, Theodotus: he is a barbarian, and thinks that the customs of his tribe and island are the laws of nature". George Bernard Shaw, "Ceasar and Cleopatra".

(Slightly off topic: Feynman had a nice story in one of his books about how the maid in a Japanese guesthouse he stayed at walked in while he was naked and having a bath. She didn't flinch and just went on about her business like nothing had happened, and he was thinking what a fuss/embarrassment etc. that would have caused if it happened in a hotel in the US -- when it's just an adult being naked with another adult present. It's not like everybody hasn't seen genitals before or it's a big deal.)

2
Animats 10 hours ago 2 replies      
At last, something that could potentially challenge Facebook's world domination. Somebody gets a federated social network running with a substantial user base, and it runs into this.

The US position on child pornography comes from the Meese Report during the Reagan administration.[1] The Reagan administration wanted to crack down on pornography in general to cater to the religious base. But they'd run into First Amendment problems and the courts wouldn't go along. So child pornography, which barely existed at the time, was made the justification for a crackdown. By creating ambiguous laws with severe penalties for child pornography and complex recordkeeping requirements, the plan was to make it too dangerous for adult pornography to be made commercially. But the industry adapted, filling out and filing the "2257 paperwork" as required.[2] After much litigation, things settled down, porn producers kept the required records, and DoJ stopped hassling them about this.

So that's how the US got here. That's why it's such a big deal legally in the US, rather than being a branch of child labor law. Japan doesn't have the same political history.

Federated systems are stuck with the basic problem of distributed online social systems: anonymity plus wide distribution empowers assholes. That's why Facebook has a "real name" policy - it keeps the jerk level down.

[1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015058809065;vi...
[2] https://en.wikipedia.org/wiki/Child_Protection_and_Obscenity...

3
rangibaby 14 hours ago 5 replies      
I have lived in Japan since I was quite young (late 20s now) and don't see what the problem with lolicon is. It's not my thing, but if someone enjoys it that's their business, they aren't hurting anyone. That's just my gut feeling on the matter, I'm interested in hearing others' thoughts.
4
kstrauser 11 hours ago 0 replies      
I own a Mastodon instance and love its federation options. For instance, I could decide to outright disconnect from that instance (in Mastodon speak, to "block" it) so that my users don't see it (and vice versa). I chose in this case to "silence" it, which means:

- My users can still talk to its users and see posts from people they follow.

- Posts from that instance don't show up on my "federated timeline" (which is a timeline of all posts made by my users and by the people they follow on other instances; great way to find new interesting people).

- I don't cache any media sent from that instance. The default is to cache images locally: if a user on a tiny instance has 10,000 followers on a busy one, the busy one doesn't make the tiny instance serve up 10,000 copies of every image.

So again, my users can talk to their users just like normal, but no one on my instance sees anything unless they specifically opt in to, and any content I dislike never travels through my network or gets stored on my server. I'm happy with that arrangement.
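
Mastodon itself is written in Ruby, but the silence-versus-block distinction is easy to model in a few lines; a toy sketch with made-up names, not Mastodon's code:

    # Toy model of the moderation semantics described above.
    defmodule FederationPolicy do
      # :block   -> instance fully disconnected, nothing gets through
      # :silence -> only posts from authors a local user explicitly follows
      # :none    -> posts also show up on the federated timeline
      def visible?(:block, _follows_author?), do: false
      def visible?(:silence, follows_author?), do: follows_author?
      def visible?(:none, _follows_author?), do: true
    end

    IO.inspect(FederationPolicy.visible?(:silence, true))   # true
    IO.inspect(FederationPolicy.visible?(:silence, false))  # false
    IO.inspect(FederationPolicy.visible?(:block, true))     # false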

5
xg15 13 hours ago 2 replies      
I'm all for decentralized communication but I don't think the example of the article is particularly convincing and I wonder if the article is asking the right questions.

So the uncomfortable reason why Mastodon is so popular in Japan is that Pixiv operates a large Mastodon node which is used to share/discuss questionable images.

Discussions about lolicon aside, does any of this actually have anything to do with the detail that Mastodon supports federation?

The article states that decentralisation is important to allow different rules for different communities. However, if Pixiv, for example, disabled federation or switched from Mastodon to something proprietary, would that change anything? Similarly, Reddit is highly centralized technically but - currently - provides freedom for each subreddit to define its own moderation rules (within the restrictions of Reddit, the company).

I feel there is a difference between "decentralisation" on the social layer and on the technical layer, and that difference should be kept in mind.

6
CurtMonash 13 hours ago 3 replies      
Images of all sorts of criminal acts are deemed acceptable, as long as no harm is done to actual individuals during those images' creation.

I've never seen why child porn should be an exception.

That I would think poorly of somebody for enjoying certain categories of child porn is beside the point.

7
jancsika 13 hours ago 1 reply      
> Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance.

That doesn't seem to be a struggle at all. All kinds of users leverage Tor for all kinds of reasons.

The struggle is to recruit everyday users who have the inclination, technical expertise, and rhetorical skill necessary to defend the technology against all kinds of fearmongering tactics.

There is a general lack of such people. If the same set of interests bent on defeating Tor set their sights on TCP, you can bet that technologists would be struggling to find ways to defend it that could resonate with the general public.

8
klondike_ 13 hours ago 0 replies      
This really shows the advantages to a federated social network. People have all sorts of sensibilities about what is acceptable content, and a one-size-fits-all moderation approach like on Twitter will never work for everybody.
9
emodendroket 14 hours ago 0 replies      
Lolicon can also refer to live action stuff where the model is of age but looks younger. Also, the rules on this stuff in the US are quite murky and vary by state, rather than being simply illegal across the board as this article wants to suggest.
10
SCdF 14 hours ago 2 replies      
On the topic of Mastodon, I wonder if the reason it hasn't caught on so much (outside of this use case) is precisely because it's federated.

When a new social network comes along, I often sign up ASAP just to try to grab SCdF, because I'm a human and vain. I will usually give it a bit of a crack once I've done that, but the need to squat my username is a big (and I realise, stupid) driver for me.

I've known about Mastodon for awhile now, but I don't feel any pressure to sign up and check it out because there is no danger of someone else taking my username. Worst case I could just host my own instance against my domain.

11
bryanlarsen 14 hours ago 0 replies      
Porn is too ubiquitous and accepted on the common web to really drive technologies the way it used to.

For example, bittorrent started with porn, but that's not what drove its growth or made it successful. If the credit card companies didn't allow porn transactions on their networks, bitcoin would probably be much larger today. Tor is a similar story, I assume.

12
codedokode 6 hours ago 0 replies      
> lolicon drawings are prohibited

> gory, bloody and violent pictures are allowed

They must have something wrong with their head.

13
nihonde 14 hours ago 0 replies      
Saying something with a few hundred thousand users is "big in Japan" is a stretch, at best. There are 130MM+ people in Japan.

I mean, I have an iOS app that has about that many MAU, and I consider it to be basically a failure.

14
SCdF 14 hours ago 2 replies      
The big surprise to me is that Deviant Art is supposed to be about photography!?
15
ygaf 10 hours ago 0 replies      
>Its a constant struggle for Tor to recruit everyday users of Tor, who use the service to evade commercial surveillance. Those users provide valuable cover traffic, making it harder to identify whistleblowers who use the service, and political air cover for those who would seek to ban the tool so they can combat child pornography and other illegal content.

Wait - I thought people weren't meant to use Tor (thus its bandwidth) if they didn't need it. Or are they recruiting not just any people, but those who will contrive to browse all day / not download heavily?

16
fundabulousrIII 14 hours ago 0 replies      
Thought they were talking about the band and the decibel level.
17
Eridrus 14 hours ago 0 replies      
Most people are on Twitter because of network effects.

Twitter made this a non-issue for lolicon users by banning them, but it's also interesting to note that it sprang up due to support from an existing website.

Most people (myself included) who are dissatisfied with aspects of Twitter are not motivated enough to try to fix them.

18
mirimir 13 hours ago 3 replies      
Well, it's not just pictures.

> After the enforcement, there will still be high school girls out there who are going to want to earn pocket money, and the men who target these girls wont disappear, either, said an official from the Metropolitan Police Department.

> The police come inside, so there are no more real JK girls at the shop. Most of the business is being arranged over the internet, through enko (compensated dating) services.

https://www.japantimes.co.jp/news/2017/07/06/national/crime-...

Global Internet morality is unworkable.

19
reustle 14 hours ago 2 replies      
I was expecting to read about the heavy metal band, Mastodon.
20
coldtea 14 hours ago 2 replies      
21
whipoodle 14 hours ago 1 reply      
Child porn and Nazi stuff have long been really bright lines in user content. Recent events have revealed more acceptance of Nazis and adjacent groups in our society than previously thought, so I guess I could see the taboo against child-porn easing up too. Very sad and scary.
22
amelius 15 hours ago 1 reply      
Sounds similar to the story of BetaMax versus VHS.

Edit: sorry for the brevity, pfooti below explains it well.

25
A big, successful trial of probiotics theatlantic.com
255 points by ValentineC  2 days ago   65 comments top 16
1
rubidium 2 days ago 3 replies      
Here's the nature article (paywall): http://www.nature.com/nature/journal/vaop/ncurrent/full/natu...

"The special mixture included a probiotic called Lactobacillus plantarum ATCC-202195 combined with fructo-oligosaccharide (FOS), an oral synbiotic preparation developed by Dr. Panigrahi." (from https://medicalxpress.com/news/2017-08-probiotics-sepsis-inf...)

Decrease in other infections (respiratory and nasal) is particularly interesting to see.

Now I want a HN MD to weigh in.

2
manmal 2 days ago 1 reply      
If you want to try L. plantarum yourself, get the strain 299v if you can. There's one by Jarrow (not affiliated, I've just had good experiences with it). AFAIK, most other strains of L. plantarum produce D-lactic acid, while 299v produces L-lactic acid (I've even seen a paper where they used 299v to treat acidosis). D-lactic acid can be hard to metabolize and can lead to acidosis. Probiotics-induced acidosis is a thing - not good, most people with this end up in hospital and need infusions and some kind of intervention for the long term.

Acidophilus generally produces D-lactic acid, while e.g. L casei shirota (Yakult) does not. It pays to take a hard look at the strains in a probiotic.

3
Ovah 2 days ago 1 reply      
I find the word probiotic to be inherently problematic. It includes any and all microorganisms that have a single positive health benefit, whether or not they're detrimental in other regards. So even if a bacterium were to penetrate the gut epithelium (very bad), it would still be a probiotic if it reduced constipation.

Even if the research supporting the health benefits of a supplemental bacteria is sound, a single study always has a restricted scope.

It's saddening that 'probiotics' have found their way into products such as baby formula, while there is pretty much no regulation governing their sale and use (at least in Europe).

4
RcouF1uZ4gsC 2 days ago 2 replies      
The 9% placebo and even the 5% treatment sepsis rate seems very high. According to http://emedicine.medscape.com/article/978352-overview#a6 the US rate of neonatal sepsis is around 2/1000 live births so around 0.2%.

Given this, I wonder how applicable these results would be to neonates in the US.

5
culiuniversal 2 days ago 1 reply      
They initially planned to include 8000 babies, but stopped early. With only a week of treatment they already found a significant decrease in sepsis rates. They stopped because they thought it'd be highly unethical to deprive the other babies of a life-saving treatment. This is a heartwarming example of when humanity and science conflict, but I'm glad humanity won
6
amai 15 hours ago 0 replies      
I can recommend https://en.wikipedia.org/wiki/Lactobacillus_reuteri

"Similar results have been found in adults; those consuming L. reuteri daily end up falling ill 50% less often, as measured by their decrease use of sick leave."

7
Mz 2 days ago 1 reply      
Aside from preventing sepsis, it also reduced the risk of infections by both the major groups of bacteria: the Gram-positives, by 82 percent; and the Gram-negatives, which are harder to treat with antibiotics, by 75 percent. It even reduced the risk of pneumonia and other infections of the airways by 34 percent. That was completely unexpected, says Panigrahi, and its the result hes especially excited about. It suggests that the synbiotic isnt just acting within the gut, but also giving the infants immune systems a body-wide boost.

I don't know why they are so surprised. Given that some sources say the gut constitutes up to 70 percent of the immune system, it should be fairly obvious that improving gut health will have such effects.

Furthermore, you can infer a fairly direct relationship between gut health and lung health based on what happens in the body at altitude: You start urinating more to compensate for the thin air reducing your ability to exhale wastes. The body starts clearing them out of the blood by shunting them to the kidneys.

8
icelancer 2 days ago 0 replies      
The trial looks to be exceptionally rigorous and the sample size is very large. This is exciting science to say the least!
9
csr12928834 2 days ago 0 replies      
Interesting. I was skeptical of probiotics but the evidence seems to be suggesting otherwise.

This kind of builds on meta-analyses showing that probiotic use cuts rates of antibiotic-associated C. difficile infection.

Young children have very unstable microflora systems, so the results make sense from that perspective also.

10
mattparlane 2 days ago 2 replies      
Why the use of a placebo in a trial involving babies? Surely they would be immune to any placebo effect?
11
Havoc 2 days ago 3 replies      
My issue isn't on the faith in probiotics side...the problem is which ones do I buy.

It's near impossible to know you're getting the good stuff so to speak.

12
markdown 2 days ago 0 replies      
Does yoghurt have the good stuff?
13
TheBeardKing 2 days ago 1 reply      
The study was done on newborns just starting breastfeeding, but doesn't say whether it included only vaginal births or C-sections as well.
14
colordrops 2 days ago 3 replies      
All the products containing Lactobacillus plantarum may see a boost in sales.
15
quickthrower2 1 day ago 0 replies      
16
nikolay 2 days ago 1 reply      
I've been using General Biotics 115-strain pre- and probiotic product called Equilibrium [1]. You can find more info in the Science section [2].

[1]: https://www.generalbiotics.com/orders/new/

[2]: https://www.generalbiotics.com/science/

26
BYTE Magazine's Lisp issue (1979) [pdf] archive.org
248 points by pmoriarty  3 days ago   151 comments top 27
1
boramalper 2 days ago 11 replies      
When I come across magazines from the past such as this, I keep wondering why and when we stopped writing such beautifully crafted technical articles for the masses and instead turned to advertisement-like pieces on consumer electronics. Look how empowering those articles were, treating you as a creative being, and how passivizing the current ones are, encouraging perpetual consumption.
2
HankB99 2 days ago 2 replies      
Byte was my favorite magazine of all time - even better than Computer Shopper. ;)

I remember the last-page article - Stop Bit. One particularly memorable one described how various professionals would search for an elephant. Some I recall are:

- A C programmer would start at the southernmost point in Africa and travel east until they got to the ocean and then move north and head west to the opposite shore, repeating until they had covered all of Africa. An assembler programmer would follow the same strategy but do it on their hands and knees.

- A college professor would prove the existence of an elephant and leave it as an exercise for the students to actually find one.

- A marketing executive would paint a rabbit gray and call it a desktop elephant.

I wonder if I could find that article. I'll have to see if Archive.org is searchable. Or maybe I can find it by searching today: https://www-users.cs.york.ac.uk/susan/joke/elephant.htm :D

3
dvfjsdhgfv 2 days ago 3 replies      
The "You can do surprising things when you have 64 kilobytes of fast RAM" ad made me realize how little we appreciate the abundant resources we are lucky to have these days...
4
keithnz 3 days ago 4 replies      
yeah, lisp is cool, but I'd need a computer to run it on... I'm seriously considering one of those 8070 Series I Business Systems..... it has dual floppies, 591K bytes of storage, a 19" color display, a 60 cps impact matrix printer, and!! they say at twice the price it would be a bargain, so at $7000 it seems the way to go
5
KC8ZKF 2 days ago 1 reply      
In "About the Cover" on page 4, the editor invites the reader to examine the monolith and "identify the textbook from which these S-expression fragments were taken, and the purpose of the program."

Anybody have a clue?

6
magoghm 2 days ago 0 replies      
I had a subscription to BYTE magazine. That issue was how I discovered there was this amazing language called LISP.
7
cmic 2 days ago 0 replies      
My first issue was in 1984 (Forth Issue). I couldn't live without it, then. Until the end of Byte. We had no equivalent source of info in France. It was a fantastic and eclectic source of programming hints, ideas, whatever. I'm now 66 and retired as a Sysadmin. Very good memories.--cmic
8
lispm 2 days ago 1 reply      
There is another BYTE Lisp issue: February 1988 Vol 13 No 2
9
dr_ick 2 days ago 2 replies      
265MB PDF!!!

I don't think this thing is 1979 compatible.

10
huffer 3 days ago 1 reply      
Wow, great! This magazine is precisely as old as I am. I wasn't born with a lisp, just a fondness for it (and now I know why).
11
tolgahanuzun 2 days ago 1 reply      
Only half of the magazine is advertising. But I cannot deny that it is interesting. Cool.
12
tannhaeuser 2 days ago 4 replies      
I've got a collection of 1988-90 AI Expert magazines on LISP, Prolog & Co. with interesting design features such as lino-cut-style artwork for (periodical) columns and a special Elite condensed type face for code. Does anybody know if it's ok to put these on a web site with credit when I can't get in touch with the original authors and artists (does archive.org have or need special permission from BYTE)?

Btw, I'd love links to 1986-1996ish articles on SGML and markup technologies.

13
Gargoyle 3 days ago 0 replies      
Also featuring A Preview Of The Motorola 68000!
14
daly 1 day ago 0 replies      
I wrote an article on hobby robots (I was working for Unimation at the time, the company that invented robots). Sadly I can't seem to find the issue online.
15
pinewurst 2 days ago 1 reply      
16
wiz21c 2 days ago 1 reply      
I know it's totally local to French speakers, but does anyone remember Hebdogiciel or Pom's? Both were great. The former was crazy and had lots of code in it and a very "free speech" nature (think Charlie Hebdo but for computers), and the latter was all about Apple (2, 2+, 2e, 2c; not the i-thing you're looking for).
17
hultner 2 days ago 2 replies      
What an amazing cover! Anyone know if it's available as a poster or standalone graphic?
18
pagl309 2 days ago 2 replies      
Would be interested to know if these articles are worth reading to learn about the language; i.e. ~40 years later, has the language changed too much to make the content here useful for learning purposes?
19
delegate 2 days ago 0 replies      
I haven't seen all ads in the magazine, but I notice that most (all?) of the companies are no longer around. Except one. The ad had a bit of a prophecy in it too: "You can't outgrow Apple."
20
eulevik 3 days ago 0 replies      
Great stuff, good to see again
21
KirinDave 2 days ago 1 reply      
This is so, damn, fantastic.

I wish I could say I'm nostalgic for it, but it predates my existence. What's it called when you yearn for the style and typography shortly before you were born?

22
eleitl 2 days ago 0 replies      
I kept raiding the library archives of the local army university for such back issues in the 1980s and 1990s. Great technical content.
23
ngvrnd 2 days ago 0 replies      
I believe I still have a copy of that issue. Does anyone remember "M-Lisp" at all?
24
emmelaich 2 days ago 0 replies      
David Betz's Xlisp first appeared in BYTE I believe. It's still around.
25
s369610 2 days ago 0 replies      
page 66 discusses the "Model of the Brain" called CMAC (Cerebellar Model Arithmetic Computer). Now I have to read up on what became of that model.
26
VonGuard 3 days ago 2 replies      
((((((((((((((((((((((((((((((((((((((((Neat)))))))))))))))))))))))))))))))))))

Shit, it's not working...

27
idibidiart 2 days ago 6 replies      
Lisp is the only high-level programming language that has no syntax. In Lisp, s-expressions are used to encode both form (data structures) and function (algorithms) of computer programs. Since code and data are seen for what they are (two sides of the same coin), the distraction of a real PL syntax is eliminated, and the programmer is able to think more coherently.
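
A minimal sketch of that code-is-data idea, written in Python rather than Lisp and assuming nothing beyond the standard library: the s-expression (+ 1 (* 2 3)) can be held as an ordinary nested list and evaluated by walking it, which is roughly what a Lisp reader and evaluator do together.

    import operator

    OPS = {'+': operator.add, '*': operator.mul}

    def evaluate(expr):
        # An s-expression is just data: either an atom (a number here)
        # or a list whose head names the operation applied to the rest.
        if isinstance(expr, list):
            op, args = expr[0], [evaluate(a) for a in expr[1:]]
            return OPS[op](*args)
        return expr

    # (+ 1 (* 2 3)) written as plain nested lists
    print(evaluate(['+', 1, ['*', 2, 3]]))  # prints 7

Because the program is already a data structure, code that inspects or rewrites other code needs no separate parser, which is the point being made here.
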
27
Google's stance on neo-Nazis 'dangerous', says EFF bbc.co.uk
251 points by dberhane  1 day ago   359 comments top 8
1
sctb 1 day ago 0 replies      
2
corobo 1 day ago 22 replies      
While I definitely don't support the people they're booting off I do have to agree with the EFF here.

For example, "And music streaming services offered by Google, Deezer and Spotify have said they would remove music that incites violence, hatred or racism."

Now that these services have put it out there as policy, someone has to define what's violent, hateful or racist in music. Racism? OK, nobody's really going to bat an eye at that disappearing.

Violence and hatred, though? As an off-again, on-again heavy metal listener... almost literally every track could be described as violent or hateful. That's the genre. The same could be said for other genres and their sub-genres. Rap comes to mind. Is Eminem next on the chopping block?

Music was the easy example; there are other examples available for the other services (registrar, DNS, hosting, CDN) as to why making this policy is a bad idea. Now all anyone needs to do is convince someone at the corresponding target that a site is similar enough that it should be taken down.

South Park had a two-parter that addressed this exact problem [1][2]

[1] https://en.wikipedia.org/wiki/Cartoon_Wars_Part_I [2] https://en.wikipedia.org/wiki/Cartoon_Wars_Part_II

3
tgb 1 day ago 3 replies      
I think the hypothetical that people like me on the left need to consider is the following. Our current vice president is extremely anti-abortion. It's no stretch of the imagination to see that portion of the country growing in strength to become the dominant view in power within ten years. In their view, abortion doctors are literal baby killers, and websites arguing the benefits of abortion are literally advocating the killing of babies. In their eyes, this is literally as bad as Hitler. If you set the standard at "ban everything that the populace deems to be as bad as Hitler" then today we get rid of Nazi sites and tell KKK members they can't use our gyms, but tomorrow who will be condemned? (Note that this isn't even a slippery slope argument: it's saying that who gets to define the slope changes.)

The other argument is that if Google and co have never ever bowed to political pressure to remove something except as required by law, that gives them a great argument to push back against some of the less progressive governments which they must work with. If Assad starts demanding that internet companies in Syria ban his political opponents, then Google could reply "we didn't even ban Nazis, why the hell would you expect us to ban anyone for you?"

And in case this all seems hypothetical, remember that the current US government recently requested all visitor logs for an anti-Trump website.

4
bedhead 1 day ago 6 replies      
It's increasingly uncomfortable to realize that a handful of tech companies are in many ways more powerful than the government. I don't like the direction anything is headed in.
5
meri_dian 1 day ago 2 replies      
This is how extremism spreads:

1. A Reasonable Position is expressed, in this case - 'Nazis are very bad'. The Reasonable Position often involves an Enemy that must be stopped. Most reasonable people will agree with the Reasonable Position.

2. The Reasonable Position becomes the overriding factor in any situation that involves it. All other factors and considerations are dwarfed by it and forgotten.

3. Because the Reasonable Position comes to dominate the thinking of the Extremist - who often means well - they come to believe one can only ever be for or against the Reasonable Position. There is no room for moderate positions that try to balance the Reasonable Position with other important considerations and values - in this case, freedom of speech.

4. In order to show support for the Reasonable Position, third parties are forced to act in accordance with the worldview of the Extremist. If they try to balance other considerations against the Reasonable Position, they are seen by the Extremist as sympathizing with the Enemy.

5. The fervor of extremism charges through society, trampling on other values and considerations.

6
apatters 1 day ago 0 replies      
It seems to me that the right to exclude certain types of speech from your privately owned platform is in itself a form of expression, and important to preserve. Where we get into trouble is when one entity obtains monopoly or near-monopoly control over a means of spreading information, and thus gains the power to tell everyone what they can and can't know.

And Google is not that far off. They have a monopoly in at least one market and the EU has already found them guilty of anti-competitive practices. The US government has not brought an anti-trust case against Google, and you could argue it's failing to do its job--the ties between Google and the US government run disturbingly deep, with Google allegedly serving as an arm of US foreign policy in many ways: https://wikileaks.org/google-is-not-what-it-seems/

Either way, the most important point is simply that monopolies are dangerous. And the best solution is to weaken them, whether through regulatory action, consumers voting with their feet, or other companies introducing competition. I think the most interesting project in this space is Searx, which allows me to aggregate results from Google and other search engines, and flip a switch to turn each engine on or off. Searx is a great step in the direction of breaking Google's monopoly and thus hindering its ability to severely limit free speech. https://github.com/asciimoo/searx

7
undersuit 1 day ago 2 replies      
I don't think private-sector companies have any obligation to host anything. The problem is we in the tech community have watched with only minor concern as the web grew increasingly centralized and left the power to these companies. The Daily Stormer has no right to a domain name or search results or ad revenue; no site does. The Daily Stormer has every right to exist, but it doesn't have a right to be served fast and conveniently (no, I'm not advocating against net neutrality; any host for the Daily Stormer should treat it exactly as they treat all their other customers). I think despicable sites like the Daily Stormer have a right to exist, but I'd rather they be hosted on a personal computer with a non-static address, and every now and then the dial-up connection gets interrupted when the site admin has to call David Duke about when the next Klan rally is.
8
nxsynonym 1 day ago 13 replies      
>"Because internet intermediaries, especially those with few competitors, control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world."

Maybe it's just me but I think enabling hate-speech and bigotry is much worse than failing to maintain 100% neutrality.

There's nothing stopping these maniacs from starting their own intermediaries to host the content (trash) they want to peddle.

28
DOJ Demands Files on Anti-Trump Activists, and DreamHost Resists npr.org
193 points by stefmonge  3 days ago   136 comments top 10
2
wmil 3 days ago 1 reply      
DreamHost is probably going to lose. The warrant is actually quite standard.

Basically the DOJ is just grabbing everything so they can filter through the comments and logs at their convenience.

https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

3
tehlike 3 days ago 2 replies      
I wonder if the EFF will support them.

Probably time for the semi-annual reminder to consider donating to the EFF.

4
whipoodle 3 days ago 2 replies      
Somehow I doubt all our stalwart free-speech proponents from the other week will come out in force for this issue.
5
mankash666 3 days ago 10 replies      
6
pmarreck 3 days ago 1 reply      
Good for them.
7
marcoperaza 3 days ago 3 replies      
They're seeking the identities of the rioters on January 20th. Over 200 people were arrested but many more have yet to face the consequences for what they did.
8
dsfyu404ed 3 days ago 4 replies      
Why is a political organization retaining any info that isn't the minimum needed to do their job?

It's not like both sides haven't been trying to get their enemies' membership lists for the past 100+ years.

9
sergiotapia 3 days ago 2 replies      
The alt-left are dangerous, but DreamHost should resist until all legal channels are exhausted. Checks and balances!
10
generic_user 3 days ago 0 replies      
The lesson to take away from this regardless of your political leanings is that whatever power you give the state to go after a group you may disagree with will always, eventually be used against you.

It makes no difference if the target groups are alleged terrorists, alt-right, alt-left, or any other group. When you expand the power of the state in a way that diminishes the privacy rights of the citizen, that law applies to you, your family, and your friends as well.

29
Show HN: Product Graveyard Commemorating the most memorable dead products productgraveyard.com
245 points by ndduong  2 days ago   124 comments top 30
1
ndduong 2 days ago 12 replies      
Hi, I'm the creator of Product Graveyard, a fun way to keep track of and commemorate our favorite products that are with us no more.

I worked on this as a side project during my summer internship at Siftery. For building the site, I used a Bootstrap grid for the front-end structure and Node.js to help with filtering and inserting the data.

Please join in by contributing a funny story or eulogy for one of the featured products.

2
mmanfrin 2 days ago 6 replies      
I'm still sore about Google Reader. I haven't found another reader with quite the right UX to replace it.
3
onion2k 1 day ago 0 replies      
What I find really interesting about lists like this one is that many of the entries are really great ideas that only failed due to poor timing or bad luck or a single error. The fact that someone failed to build something huge the first time around is not evidence that copying one of the entries wouldn't work now. It's just really hard to know which idea might work if it launched today instead of two years ago.
4
tradersam 2 days ago 2 replies      
Funny story, Lync[1] still exists. Actually it was an update to Skype for Business, at least on our systems at work.

I'm using it right this minute: http://imgur.com/a/qQ648

http://productgraveyard.com/products/lync.html

5
bicx 2 days ago 3 replies      
It's sobering to look at all these products and think about how developers poured thousands of hours into something that no longer exists.
6
mfrommil 2 days ago 3 replies      
Missing one of my all-time favorite dead products: Google Wave
7
wingerlang 2 days ago 1 reply      
If you had a newsletter like "new dead website of the month" that went out when something died, I'd sign up.

I also would appreciate a gallery of screenshots for each product, to get a feel for what it was.

8
AndrewKemendo 2 days ago 2 replies      
I feel the worst for Meerkat. They basically had a few weeks between blowing up huge at SXSW and then getting effectively shut down by Twitter with the launch of Periscope.

No justice in this world.

9
CM30 15 hours ago 0 replies      
Congrats on making such a neat site! It's quite interesting to see all the dead products and services that (often) never quite achieved their full potential.

That said, one thing does bother me here, and I'm not sure whether it's a mistake or not.

Basically, the all products lists don't seem to link to the individual pages for the closed products. In most cases that's likely fine (since I doubt you have separate pages for every single product listed), but it would be convenient to have them link to the product's page for more details when they're available.

http://productgraveyard.com/see-all-products-a.html

Other than that, it looks pretty good.

10
akeruu 2 days ago 2 replies      
I really like the tone and the execution.

Just a small thing that bothers me: on my desktop machine, the second column is not aligned as neatly as the others (due to the two-line descriptions, maybe?)

11
ghostly_s 1 day ago 2 replies      
I love everything about this except the name. To laypeople, "product" != "software product", and it's revealing your bias. Why don't you just call it Software Graveyard?
12
dspillett 1 day ago 0 replies      
If you are going that far back, how about including the original LapLink (and intersvr in MSDOS 6, which implemented similar features)? Pushing files over null-modem or the parallel port equivalent for extra speed was a godsend back when proper networking was relatively rare at home (or in small offices), so floppy-net was the main alternative.

The company still exists (was "Travelling Software", now renamed to Laplink Software) but obviously that specific product is pretty meaningless in today's environment unless you are playing with museum-piece hardware for nostalgia/shits/giggles.

13
franciscop 2 days ago 1 reply      
From the feedback here on HN it seems clear that you need a "suggest product" button. Maybe it could even be a Disqus box at the bottom of the home page, which would automatically give you up/down vote functionality (;
14
arscan 2 days ago 2 replies      
Great job, this is a fun concept that is well executed.

I was going to add Geocities, but I was surprised to find out that it is still available in Japan. Anyone have insight as to why Yahoo kept it alive in that market?

15
hasselstrom1 2 days ago 0 replies      
Upvoted on PH as well - You did a great job mate. Well done on the UI and the concept.
16
snth12oentoe0 1 day ago 1 reply      
Love the site! However, it looks like you have some apps listed on the main page, but not in the list of all apps. For example, I submitted Aperture because I couldn't find it in the "A" section of the list of all apps. But I see that it is actually there on the main page near the bottom.
17
srcmap 1 day ago 0 replies      
I love Google Desktop Search (RIP 2011).

Last year I found a Windows version of it online and found it still usable, even on Windows 10. Very unsafe, I know - I did use Process Explorer + VirusTotal to check its binary signatures on 60+ scanner sites.

It's still much better/faster than the native Win10 Cortana search.

Would love to know if there's any open source clone of it.

18
daxfohl 2 days ago 0 replies      
Huh, I knew I had a zombie bitcasa account that I assumed I'd been paying for but was too lazy to cancel, and was surprised to see it on your list! I think there's a market for a product that individually curates a person's miscellaneous accounts (say it watches your bank/credit card accounts or whatever) and alerts you when fees increase or the company goes bankrupt or it looks like a zombie account (and maybe offers to close it for $10 ($30 for Comcast)).
19
roryisok 1 day ago 1 reply      
Great site, brings back memories. A few little issues I found

1. On mobile I have to scroll past all the featured products to get to "all products". A link at the top or a hamburger menu would be great!

2. No search?

3. "all products" doesn't appear to include "featured products"? For instance Picasa and Google reader are in featured but not in all.

Other than that, it's a lovely design and a good concept. Well done.

20
SippinLean 1 day ago 0 replies      
>Fireworks was not a unique child. It was not different from Photoshop or Illustrator so Adobe shut it down.

That's not true; it was notably different from the two, and it was replaced by Adobe XD.

21
protomyth 2 days ago 0 replies      
I still miss Lotus Improv (I think it's not on the list).
22
warrenm 2 days ago 2 replies      
Code Warrior
23
tolgahanuzun 1 day ago 0 replies      
Wow, I remember the times I used LimeWire. It was a nice service, given the alternatives on offer.
24
leoharsha2 1 day ago 1 reply      
They should've added Orkut. Met my first girlfriend on that platform.
25
unixhero 1 day ago 1 reply      
No submit button?

Ok here then:

Foldershare

Great peer to peer file sync tool.

Acquired by Microsoft and shut down. Slowly shaking head

26
quickthrower2 1 day ago 0 replies      
Mtgox? Liberty Reserve?
27
prabhasp 1 day ago 1 reply      
Google wave!
28
warrenm 2 days ago 0 replies      
Microsoft Bob
29
paxy 2 days ago 2 replies      
Bit premature to list Flash..
30
warrenm 2 days ago 1 reply      
BeOS
30
Introducing WAL-G: Faster Disaster Recovery for Postgres citusdata.com
204 points by craigkerstiens  1 day ago   58 comments top 10
1
drob 1 day ago 2 replies      
This is great. Can't wait to be using it.

We've been using WAL-E for years and this looks like a big improvement. The steady, high throughput is a big deal: our prod base backups take 36 hours to restore, so if the recovery speed improvements are as advertised, that's a big win. In the kind of situation in which we'd be using these, the difference between 9 hours and 36 hours is major.

Also, the quality of life improvements are great. Despite deploying WAL-E for years, we _still_ have problems with python, pip, dependencies, etc, so the switch to go is a welcome one. The backup_label issue has bitten us a half dozen times, and every time it's very scary for whoever is on-call. (The right thing to do is to rm a file in the database's main folder, so it's appropriately terrifying.) So switching to the new non-exclusive backups will also be great.

We're on 9.5 at the moment but will be upgrading to 10 after it comes out. Looking forward to testing this out. Awesome work!
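
For context on the non-exclusive backups mentioned above, here is a minimal sketch of how one can be driven from Python with psycopg2. The DSN, label and copy step are placeholders, and it assumes PostgreSQL 9.6 or later, where the non-exclusive form of pg_start_backup/pg_stop_backup exists and the backup label is handed back to the client instead of being written into the data directory (the file that otherwise has to be removed by hand).

    import psycopg2

    def non_exclusive_base_backup(dsn, copy_data_directory):
        # The session that starts a non-exclusive backup must stay connected
        # until pg_stop_backup, so keep one connection open the whole time.
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT pg_start_backup(%s, true, false)",
                            ("hypothetical-label",))
                copy_data_directory()  # caller-supplied: rsync/tar the data dir
                cur.execute("SELECT labelfile, spcmapfile FROM pg_stop_backup(false)")
                labelfile, spcmapfile = cur.fetchone()
                # Store these alongside the copy; nothing is left behind in the
                # data directory for on-call to clean up.
                return labelfile, spcmapfile
        finally:
            conn.close()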

2
kafkes 1 day ago 9 replies      
Hello everyone, I'm the primary author for WAL-G and would be happy to answer any questions.
3
sehrope 1 day ago 2 replies      
I've used WAL-E (the predecessor of this) for backing up Postgres databases for years and it's been a very pleasant experience. From what I've read so far this looks like it's superior in every way. Lower resource usage, faster operation, and the switch to Go for WAL-G (vs. Python for WAL-E) means no more mucking with Python versions either.

Great job to everybody that's working on this. I'm looking forward to trying it out.

4
upbeatlinux 1 day ago 1 reply      
Wow, great work! I am definitely going to test this out over the weekend. However, AFAICT the `aws.Config` approach breaks certain backwards compatibility with how wal-e handles credentials. Also, wal-g does not currently support encryption. FWIW, I would love to simply drop in wal-g without having to make any configuration changes.
5
jfkw 1 day ago 1 reply      
Will WAL-G eventually support the same archive targets as WAL-E (S3 and work-alikes, Azure Blob Store, Google Storage, Swift, File System)?
6
craigkerstiens 1 day ago 0 replies      
For those interested in the repo directly to give it a try you can find it here: https://github.com/wal-g/wal-g
7
gigatexal 20 hours ago 0 replies      
I wonder where Python will end up in the next five or so years if Go is continually chosen for concurrent or high-perf work like this.
8
jarym 1 day ago 1 reply      
"WAL-E compresses using lzop as a separate process, as well as the command cat to prevent disk I/O from blocking."

Good to see people sticking to the Unix philosophy of doing one thing well and delegating other concerns - cat and lzop are both fine choices!
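
A rough illustration of that pipeline style in Python (not WAL-E's actual code; the paths are placeholders): cat does the reading and lzop the compression, each in its own process connected by a pipe, so a stall on one side doesn't block the other.

    import subprocess

    def compress_wal_segment(src_path, dst_path):
        # Stream the segment through `cat | lzop -c`, keeping disk reads and
        # compression in separate processes.
        with open(dst_path, "wb") as out:
            cat = subprocess.Popen(["cat", src_path], stdout=subprocess.PIPE)
            lzop = subprocess.Popen(["lzop", "-c"], stdin=cat.stdout, stdout=out)
            cat.stdout.close()  # let cat receive SIGPIPE if lzop exits early
            if lzop.wait() != 0 or cat.wait() != 0:
                raise RuntimeError("compression pipeline failed")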

9
mephitix 1 day ago 0 replies      
Fantastic intern project, and fantastic work by the intern!
10
X86BSD 8 hours ago 0 replies      
Why would this be a better option than a simple zfs snapshot, zfs send/recv backup and recovery strategy?