hacker news with inline top comments, 18 Oct 2015
Siberite: A Simple LevelDB-Backed Message Queue in Go github.com
24 points by Bogdanovich  1 hour ago   2 comments top 2
1
krat0sprakhar 30 minutes ago 0 replies      
This couldn't have come at a better time - I was actually looking for a durable message queue written in Go. Is there any way to read more about the architecture of this system? I find systems like these quite fascinating, but taking the time to go through the code can sometimes be very time-consuming. It would be awesome if more projects had a writeup as detailed as cockroachdb[0]!

Aside: There used to be a site a while back that distributed compiled binaries of Go code for all platforms. Is it still up by any chance?

[0] - https://github.com/cockroachdb/cockroach#architecture

2
xrstf 53 minutes ago 0 replies      
Sounds interesting. For my use cases, which involve few (< 10) messages/sec and no clustering, would I gain anything by using Siberite over Beanstalk?
Prison Without Punishment themarshallproject.org
21 points by hecubus  1 hour ago   25 comments top 6
1
linuxhansl 35 minutes ago 1 reply      
German here, living in the US (and liking it here in the US).

I can confirm that generally there is a different viewpoint. In Germany prison is somewhat of a last resort, mostly to be avoided, and even then the aspect of rehabilitation (and deterrence) is most important.

In the US I find there's often a notion of revenge, as in "This person must suffer for what (s)he did!", and more severe sentences are usually considered "more justice".

2
jit_hacker 1 hour ago 2 replies      
There are no absolutes when it comes to criminal punishment. There is always an exception to everything.

That said, I've long thought that if you stop treating people like animals, they'll stop acting like it. I've never studied prisons, or psychology, or anything remotely related. But I genuinely believe this philosophy is worth investigating.

3
d357r0y3r 37 minutes ago 1 reply      
Could it work? Of course, the data is abundantly clear here. If you treat prisoners like shit, then their chances of recidivism are much higher. If you want to reduce crime, then you have to treat prisoners well. There's really very little controversy here from an academic perspective.

The barrier is cultural. The public in the U.S., by and large, expects prisoners to be punished harshly. Retribution and deterrence rank way higher than rehabilitation in terms of the desired outcome of incarceration. As far as the average American is concerned, anything that happens inside of the prison walls, including but not limited to rape, murder, torture, is the price you pay for breaking the law.

If we want to have better prisons, then we'll need people to develop some degree of empathy for prisoners, and that's a tough battle in the U.S.

4
clamprecht 22 minutes ago 0 replies      
In prison (US federal), when a guard was a dick to inmates, our saying was "We're here AS punishment, not FOR punishment".
5
scrapcode 40 minutes ago 1 reply      
In the Federal system inmates work towards something similar through their behavior. Look up Federal Prison Camps.
6
rokhayakebe 51 minutes ago 3 replies      
Note that in the US prison is a multibillion-dollar industry. It costs around $30,000 (on the low end) to incarcerate someone yearly. Even if they spend a day or two, someone is making money. To say nothing of the calling minutes. All in all this may be a $50-100B industry in the US.

That is not going away.

U.S. Will Require Drones to Be Registered nbcnews.com
124 points by davidbarker  6 hours ago   89 comments top 14
1
neurotech1 4 hours ago 4 replies      
One thing about the RPV/Quadcopter debate that is rarely mentioned is the reason why they don't use ADS-B transponders.

The FAA requires ADS-B transponders to have high-accuracy GPS, and that pushes the cost to over $2,000 per device. It would be logical for the FAA to relax the GPS requirement slightly, so a cheap GPS module is sufficient to alert nearby aircraft of RPV activity over a certain altitude (e.g. 200 ft AGL). These RPV-grade ADS-B transponders could use a limited signal output, to avoid nuisance pop-ups from longer distances. The transponder Mode-S ID uniquely identifies the RPV.

It would be possible for a transponder to use an alternate channel frequency, similar to how many General Aviation aircraft use 978 MHz ADS-B. Even with an alternate RPV channel, the RPV operators would still be alerted to regular aircraft operations.

2
Malstrond 5 hours ago 3 replies      
I'm wondering how this will be implemented.

If used recreationally, they're R/C flying models, no matter how much the DJI marketing dept likes the term. So will the hobby that has been fine for decades be hit too?

Will you have to register a Cheerson CX-Stars that weighs 8g?

What about the congressional Special Rule for Model Aircraft? "Technically it's not the FAA so we can regulate everything!" or what?

They're also not that hard to build yourself and the components are shared with other things (e.g. motors are shared with R/C cars and the first popular flight controller was an Arduino with the sensors from a Wii Motion Plus or Nunchuk) so you can't ban the part sales.

3
saulrh 5 hours ago 1 reply      
And now we hope that their definition of "drone" doesn't include traditional RC aircraft, ten-dollar indoor micro-quadcopters, paper airplanes, RC cars, model rockets, or children's balloons.

Edit: oh, and that it does include commercial airliners under autopilot, military aircraft under autopilot, NASA-scale rocket launches, and military guided missiles and cruise missiles.

4
MichaelApproved 5 hours ago 2 replies      
I love the progress drones have given our society. From surveying property, to beautiful aerial views in a video, to a cool view of your kids' soccer game, to just plain fun. Drones are amazing.

You can buy a great drone and get amazing shots over and over for less than it would cost to hire a helicopter once.

However, they also pose a risk. I'd favor a simple registration process and clear cut rules that are not infringing on the use of drones. I'd like to see the registration used mostly to enforce the rules and track a drone back to its owner.

Ideally, drones should also broadcast their IDs, so it's easy for other pilots to know they're in the area and also allow LEO to ticket/fine for offences. Without a broadcasting ID, most of these rules will be hard to enforce.

5
phren0logy 5 hours ago 0 replies      
If I attach a gun to it, then can I avoid registering it?
6
narrator 2 hours ago 2 replies      
All the big consumer drone makers are in China. I love how all sorts of unregulated, surprisingly powerful tech comes out of China. The last mischief they engaged in was way-too-powerful lasers. They just don't have the fear of science in that country like we have in America.
7
mindcrime 4 hours ago 3 replies      
This has to be one of the stupidest ideas I've ever heard in my life. Not because it doesn't possibly have some noble intent behind it, but - if for no other reason - because it's going to be bloody impossible to enforce. When any hobbyist can easily build a drone/UAV without much in the way of special skills or equipment, how in the hell do they ever expect they'll get everybody to register them?
8
canow 3 hours ago 1 reply      
They probably should require all birds to be geolocated as well... since they present about the same level of danger to real aircraft...
9
jongraehl 1 hour ago 0 replies      
Sure, new-tech drones should respond to a targeted ping w/ serial number or similar. Simple challenge/response crypto could avoid cloning/spoofing. Enforcement of registration: no response = police will follow the drone to its owner and ticket (else impound). Physical serial number for legacy drones.
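A rough sketch of the challenge/response part (the serial, secret handling and HMAC choice here are all hypothetical, just to illustrate why cloning fails without the secret):

    import hmac, hashlib, os

    # The drone holds a per-serial secret provisioned at registration;
    # a verifier sends a random challenge and checks the returned MAC.
    def respond(serial: str, secret: bytes, challenge: bytes) -> bytes:
        return hmac.new(secret, serial.encode() + challenge, hashlib.sha256).digest()

    def verify(serial: str, secret: bytes, challenge: bytes, response: bytes) -> bool:
        return hmac.compare_digest(respond(serial, secret, challenge), response)

    secret = os.urandom(32)      # shared with the registry, never broadcast
    challenge = os.urandom(16)   # fresh nonce, so replayed responses fail
    assert verify("FAA-12345", secret, challenge,
                  respond("FAA-12345", secret, challenge))

A clone that only copied the broadcast serial can't answer a fresh challenge, since it never had the secret.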
10
dham 2 hours ago 0 replies      
They just need to make these drones kit only and harder to fly. I've been flying RC with my dad since the early 90s. You would spend a month building these gas powered planes or helis. Only way to learn to fly was through other people or crashing a lot. It def kept irresponsible people out of the hobby. Plus you have to have skill to fly a 6 channel collective pitch heli. These things are not toys.
11
canow 2 hours ago 1 reply      
Drones are army planes that can fly for hours and probably weigh hundreds if not thousands of pounds... Quadcopters are toys that can fly for minutes and weigh only a few pounds...
12
joshmn 5 hours ago 0 replies      
I won't be surprised if failing to be in compliance while flying a UAV/RPA results in being arrested.
13
gesman 2 hours ago 0 replies      
I just found a tiny little drone in my backyard.

Poor little thing crashed and now it's mine! :)

14
barefoot 5 hours ago 6 replies      
In case anyone is reading from NBC, would you mind using the term "UAV" or "RPA" over "drone"? When we throw around the word "drone" here on reddit and hacker news I feel like it's okay because many of us know the key differences between what we're flying and the hellfire raining completely terrifying semi-automated flying death machines deployed in war-zones around the world.

However, many people who read your articles might not be so well informed. Many people have an incorrect and confused understanding of them, and to them the word "drone" carries that jumbled mess of associations. It would be helpful to everyone involved to use more precise modern language. You may argue - as the ACLU has - that the word "drone" is the most direct way to talk about unmanned aerial vehicles to the broad public. It may be easy to use and direct, but it's far from accurate. As Edward Murrow famously said, "We cannot make good news out of bad practice."

CS183C Session 8: Eric Schmidt medium.com
17 points by Titanous  4 hours ago   1 comment top
1
krat0sprakhar 3 minutes ago 0 replies      
> Q: Hiring?
> A: One of our rules was, we don't want to hire your friends. Another rule was not to hire people from lesser universities. Another rule was to only hire people with good GPAs. It was frustrating, but it meant that we ended up with a lot of really smart people from great universities, and that served us well.

I wonder what led to Google changing their minds and not considering the GPA as a part of hiring decisions.

New Findings from War on Poverty: Just Give Cash bloombergview.com
96 points by joe5150  6 hours ago   75 comments top 13
1
scholia 3 hours ago 3 replies      
Did anybody actually read the story?

The key observation is that a study published in Nature "found a correlation between child brain structure and family income. Simply put, family income is correlated with children's brain surface area, especially among poor children. More money, bigger-brained kids."

This is supported by the Cherokee study: when the families became a bit better off ($4,000 a year) the kids did better when they grew up.

So it's missing the point to argue about how grown-ups might (or might not) "waste" money and whether cash is better than other forms of welfare. The point is that bringing up children in poverty creates a worse outcome for society as a whole.

The fact is that $4,000 a year for 20 years is a very small amount compared with the cost of US police, courts, and prisons. If a poor kid grows up, gets a job and pays taxes, that's a massive win for society compared with the same kid growing up in the sort of deprivation that leads to a life of crime and jail.

2
asuffield 5 hours ago 6 replies      
There's more depth here than "just give cash".

What we've seen in study after study is that if you take low-income people who are struggling and give them more money, they use it in ways that improve their lives, they become more productive, and this has profound long-term impact.

At the same time, I don't think anybody really disputes that giving moderate amounts of extra cash to a heroin addict is unlikely to improve their lives. They need medical treatment before anything else.

I've never seen a study which looked at extra cash injections for long-term unemployed. That would be an interesting one. I would be unsurprised to find that cash alone was insufficient to solve their problems; the most obvious thing they need is education.

My point here is that cash clearly helps in some - probably most - circumstances. But "just give cash" is insufficient; we still need to work on all the other things as well.

3
rgovind 18 minutes ago 0 replies      
A related view expressed by India's central bank governor

http://www.bloomberg.com/news/articles/2014-08-11/rajan-urge...

"Cash would empower the poor to choose where to buy goods, providing an alternative to government-run monopolies and creating competition in the private sector"

4
zerebubuth 3 hours ago 0 replies      
Many of these studies appear to be variations on the form "We selected some people, gave them some money and told them we'd be back to check up on how they were doing." I wonder if there's a significant effect from the expectation that being part of the study will improve one's life. Perhaps receiving attention from authority figures (PhD or MD), and potentially their judgement, might alter behaviour in similar ways to the "honesty box" experiment [1].

It seems like there might also be some bias in selection - presumably one has to consent (in writing) to be part of a long-term study. Perhaps this encourages participants to think of the future and might influence decision-making away from short-term goals and towards "investment" uses of the money rather than ephemeral ones.

I'm looking forward to seeing the results of their differential experiment, and whether there's as much difference between the $333 and $20 groups as there is between the $20 group and the "$0 group" of the general population.

[1] http://rsbl.royalsocietypublishing.org/content/2/3/412

5
yummyfajitas 4 hours ago 8 replies      
There is unfortunately a large political danger with giving cash. If you give cash, then it becomes possible for pundits to agitate for more assistance on the theory that "you can't live on $X". When poor people are explicitly given a room, 3 square meals/day, and government-issue poor-people sweats, it's pretty hard to argue that they are somehow lacking anything necessary to live.

Based on this article, there is also no reason to believe that cash assistance rather than in-kind assistance is necessary. The proposed mechanism is "Parents are happier because they have more money, leading to less fighting within the family. This lowers stress on kids..." But in-kind assistance would also lower stress since parents wouldn't need money.

In-kind assistance has the added benefit that parents can't divert public assistance intended for children's welfare into other goods (e.g. alcohol, tobacco, drugs, junk food).

6
rtehfm 4 hours ago 0 replies      
Money is a mechanism that, in a way, insulates us from a variety of problems; with those reduced, we can focus on other things. For instance, when you're not worried about where your next pay check is coming from, you can invest that energy elsewhere. Reading, continued education, practicing your craft, etc. When one is no longer burdened by the hunt, he's available to venture into other activities.
7
jluxenberg 4 hours ago 0 replies      
It seems that we can give money to those in need, and see a "return" on that money in terms of increased employment and generation of value. Sounds like an investment to me.

The government is in a unique position; it is able to realize a gain on this via income tax revenue. Maybe this can be done via basic income? Could we pass a basic income bill as an "investment in America", and is there a model where the government actually makes a return on this investment?

8
littletimmy 5 hours ago 3 replies      
Isn't this exactly what Milton Friedman advocated for some 40 years ago? Why is this just being "discovered" now?
9
joe5150 6 hours ago 0 replies      
"We examine how a positive change in unearned household income affects childrens emotional and behavioral health and personality traits. Our results indicate that there are large beneficial effects of improved household financial wellbeing on childrens emotional and behavioral health and positive personality trait development...Parenting and relationships within the family appear to be an important mechanism. We also find evidence that a sub-sample of the population moves to census tracts with better income levels and educational attainment."

Signs and wonders....

10
afarrell 5 hours ago 3 replies      
I'm also curious if we could reduce the number of children born into poverty by offering to pay men $2,500 plus medical costs to get a vasectomy.
11
JorgeGT 5 hours ago 1 reply      
> giving poor families money, on top of the benefits they already receive, improves their children's behavior

I don't think anyone questions that extra money does good. The big question in the fight against poverty is: given X available welfare dollars per family, what is the optimum allocation between giving them as benefits or giving them as hard cash?

12
unclebucknasty 2 hours ago 1 reply      
These studies seem to crop up now and again.

What's funny is that we have devised this completely contrived way of divvying up the world's resources, including this notion of private ownership over key natural resources. But, in truth, no one needs to go hungry, without shelter, water, etc. There is enough.

But, then, we step back and say, "what if we give these people, who currently cannot subsist under this scheme, some marginal share of the resources we've convinced them by fiat are someone else's to give them in the first place?"

Then, of course, we measure their outcomes within the context of the same scheme, and ponder other ways to help them.

Yet, the scheme itself is much more seldom questioned. That one person can earn billions from what's pulled from the earth we all inhabit, while others die from lack of access to the same, should be expected to create irreparable distortions in outcomes. But it's somehow accepted as an unchangeable, almost natural premise, even as we search for solutions.

13
raykaye47 4 hours ago 0 replies      
Poor people are ALREADY given cash. It's called EBT, and you see them squandering it on junk food and trading it for cigarettes.
The Uses of Orphans thenewinquiry.com
7 points by acsillag  1 hour ago   discuss
The Hostile Email Landscape liminality.xyz
367 points by jodyribton  8 hours ago   183 comments top 47
1
tacon 6 hours ago 6 replies      
I've managed my own mail server since 1993, and my email address has been the same that entire time. Here are some tips for maintaining sanity:

Greylisting still works amazingly well. With a long, long whitelist and greylisting plus DNSBL, I don't even bother running a spam filter, since the little bit of spam and email from new senders ends up in its own directory, having come from non-whitelisted senders.
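For anyone unfamiliar, the core of greylisting is tiny; a toy sketch (the delay value and in-memory store are illustrative - real implementations also expire entries and auto-whitelist proven senders):

    import time

    GREYLIST_DELAY = 300   # seconds before a retry is honoured (illustrative)
    seen = {}              # (client IP, sender, recipient) -> first-seen time

    def check(client_ip, sender, recipient):
        triplet = (client_ip, sender, recipient)
        first = seen.setdefault(triplet, time.time())
        if time.time() - first < GREYLIST_DELAY:
            return "450 try again later"   # temporary failure; real MTAs retry
        return "250 ok"                    # spam cannons rarely bother retrying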

Comcast finally started blocking residential mail server ports inbound a few years ago, so I had to migrate to a smarthost environment using a VPS as email server for $15/yr.[1]

Last year for a few months, Gmail was dropping everything I sent into the spam folder, even after recipients were marking it not spam. I eventually discovered the "Authentication-Results:" header that Gmail adds to every inbound message. It is under the "Show Original" dropdown menu. That showed that I "hadn't changed anything"(!) on my mail server, but suddenly Gmail was connecting to my mail server over an IPv6 interface, and I had never bothered to put the IPv6 block into the SPF record. Gmail was nice enough to explain exactly what it didn't like about those emails.
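The fix was a one-line DNS change; a sketch of the kind of record involved (the domain ranges below are documentation placeholders, not mine):

    # Assemble an SPF TXT record that covers both address families.
    ipv4_block = "203.0.113.0/24"   # the server's IPv4 range (placeholder)
    ipv6_block = "2001:db8::/32"    # the IPv6 range Gmail connects from (placeholder)
    print(f"v=spf1 ip4:{ipv4_block} ip6:{ipv6_block} -all")
    # Published as a TXT record on the sending domain, this stops
    # Gmail's IPv6 connections from failing the SPF check.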

[1] http://lowendbox.com/blog/top-provider-poll-2014-q3-the-resu...

2
sjackso 7 hours ago 3 replies      
I've run into similar issues with a similar setup. It's frustrating. You can convince gmail user A to whitelist your messages, and so they'll get through to user A, but gmail user B probably still won't see messages from you unless you tell him to dig them out of the spam trap. And your messages to A might still be classified as spam if they have attachments or hyperlinks in them. (Even if you've been corresponding with A for several years!)

Once upon a time, to send email, you needed to use SMTP. Now, you must use SMTP, from an IP block that isn't categorized as residential and which has never before had any association with outgoing spam, and you also must implement several ad-hoc identification protocols like SPF and reverse DNS. You should also use a domain name that you've owned for some time and which is not expiring soon. Every system to which you want to send mail will give different weights to all these signals. If they don't like you, their behavior is to report successful delivery and then silently hide messages from their intended recipients.

Spam is no fun, but the present situation is pretty weak too.

3
cm2187 8 hours ago 6 replies      
The problem is not so much the attitude of the big guys. It is that SMTP is fundamentally broken. We need a better mail protocol that ensures:

1. Traffic always encrypted and content always signed

2. Guarantee that the sender is who they claim to be

3. Decorrelating the email address from the domain; a lot of users are prisoners of their current provider just because the address they gave everyone ends with the provider's domain name, very much like it is very hard to switch bank accounts

4. Ability to provide disposable addresses which can be deleted when spammed

3 and 4 would require a sort of token system

4
dredmorbius 7 hours ago 0 replies      
Many of the issues we're running into with online systems, particularly those relating to quality and reputation (spam, collaborative filtering or content rating as on HN or Reddit, etc.) have strong analogs in real-world social spaces. And there were real-world mechanisms for dealing with these.

For a new businessman or professional setting out in the world, pre-Internet, "establishing your name" was a requirement. These days the concept's often referred to as "creating a personal brand", but the reality was pretty straightforward: how does an unknown quantity become a known quantity?

A common method was the professional or social introduction. This is still practiced, where a third-party _matchmaker_ will introduce two parties. The matchmaker usually knows both, and can vouch for the newcomer and smooth the path for introductions with the established party. Essentially, the matchmaker stakes _their_ reputation by speaking for another.

Lawrence Lessig describes a similar concept in his book Code and Other Laws of Cyberspace, in a passage describing a physical messaging system, the Yale Wall. This was a board onto which messages could be posted, with the proviso that they be signed. Unsigned messages would be posted from time to time, effectively presenting an anonymous viewpoint. Removal wasn't instantaneous or automatic, but rather, at some point prior to garbage collection, another individual could review the piece and, if they felt it warranted posting, sign for it. They weren't registering themselves as author, but as vouching for the merits of the viewpoint -- not necessarily agreement.

A messaging system in which a new peer might be able to indicate that, hey, peers X, Y, and Z, with established reputations can vouch for me, and for which those peers could confirm their endorsement, might address the "how to build reputation" issues of new mailservers.

I've also found that even large mail systems will frequently have some procedure for getting at least provisionally vetted, effectively a workfactor cost to coming online. Though the process absolutely could be improved.

5
eridius 6 hours ago 2 replies      
I feel like this is a solvable problem without making any changes to email whatsoever. The problem is the email recipient hosts are suspicious of the sender (as opposed to the message itself being suspicious). So the solution is to have a standardized way for senders to acquire an instantaneous reputation by tying their real-world identity to it (which lets them be held accountable if they do spam), and perhaps by throwing some money at it too. If there was some company that did identity checks, similar to how EV certificates are given out, then that ties your real-world identity to it (this could in fact be done by literally requiring an EV certificate for the hostname of the sender). This company could also take a decent-sized deposit (so you're staking money on not being a spammer) and hold it in trust for a set amount of time. Once the time has passed, and you've sent enough emails for recipients to draw meaningful conclusions, if you have in fact not spammed, then you get your deposit back (minus a service fee). Then all the big email hosts would pay this company to query it about senders the host doesn't already trust, and similarly they'd report any spam from these hosts back to the service.

Heck, this doesn't even have to be a new company. A big host like Google could just start offering this service anyway, as a way to simplify their own handling of unknown senders, although I'd feel more comfortable if this was done by someone else.

6
codecamper 7 hours ago 1 reply      
Here is an almost related story: I am in Italy and was waiting for an email from the local Apple store to tell me my MacBook's repair was complete. After 6 days of waiting, I called the store. They said the computer had been ready for a few days.

I checked the spam folder. Gmail had plopped the Apple email into the spam folder. Gmail's reason was that the email was in a foreign language, Italian. Didn't matter that for the previous 2 weeks all my google.com searches were done from Italy.

7
waynecochran 7 hours ago 7 replies      
Create an email network where it would cost a penny to send email. It would be paid into the bitcoin wallets of the folks maintaining the infrastructure.

Every email would be digitally signed and encrypted. Certificates with keys would be connected to email addresses (and bitcoin wallets).

Spam would die.

Go build it please.

8
jasode 7 hours ago 1 reply      
>This isn't how the internet is supposed to work.

The email architecture was started back when it was a smaller network of researchers at universities, governments, etc. Everybody basically trusted each other.

Once the "internet" is available to the general public and commercial interests, it becomes vulnerable to the "bad actors" problem (e.g. spam abuse). That's why we have the inevitable situation today of a few entities (e.g. gmail, hotmail) being "trusted", and random residential SMTP servers run by homeowners being "untrusted".

I haven't seen a realistic de-centralized trust proposal for email. Even if a proposal is theoretically sound, what incentive is there for other big players to adopt it?

9
teddyh 6 hours ago 3 replies      
I sometimes see similar tales of woe, and I can only say that this does not match my experience. I've done this many times: you set up the mail server, configure DNS correctly (including reverse lookup), and that's it. I've never had problems being blacklisted or mail getting classified as spam.

I suspect that people having trouble are sending a lot of mail, like newsletters, etc. But I can't prove this hypothesis.

10
hannob 1 hour ago 0 replies      
I have a bit of experience with running email servers. I can't really say that I had similar encounters.

In my experience if you get blocked by big mail providers it's almost always due to some reason. What's tricky is that it may be hard to tell what exactly is wrong, because they won't necessarily tell you (or not in an easy way).

Some advice what I'd do to try to find out what's going on:

1. Take a sent example mail that is like the blocked one (but obviously one that reached its target destination), with all headers, and run it through SpamAssassin. Don't just look at whether it hits the spam threshold (if it does, you did something terribly wrong); look at each individual rule that SpamAssassin hit. They might give you a clue. A proper mail usually shouldn't hit any, or only very few, positive SpamAssassin rules.

2. Check your IP at a service like valli where you can query multiple DNS black lists. If it is on any blacklist try to find out how you can be delisted. There are some rogue blacklists that make it impossible to be delisted at all, you may ignore them (google for them, their behavior is well documented), but these shouldn't be more than 1 or 2. As already said by other commenters, don't forget IPv6.

3. Read whatever error message you can get your hands on. If you're blocked on the SMTP level read the error message. If your message got sorted into a spam folder look at all the headers. If the provider blocking you has some online docs about their spam filtering read that. If they have some sort of service for mail ISPs where you can sign up to get warnings sign up there.

Of course also the obvious stuff. If you do anything that is mass mailing you are in extra danger. Make sure that you allow people to unsubscribe easily, don't ignore manual attempts by them to unsubscribe ("I want to get off this mailing list") and delete invalid mail addresses.

11
jwatte 40 minutes ago 0 replies      
Charge the sender one cent per email through a combination of legislation and technology. The problem will instantly go away. The transfer of mailing lists to online forums will be a very small price to pay.
12
zrm 4 hours ago 1 reply      
I was thinking about this a while ago and have been meaning to write it up and post it somewhere, so I guess this is as good a time as any.

Hashcash (also known as the precursor to Bitcoin) was proposed to solve this problem in 1997:

https://en.wikipedia.org/wiki/Hashcash

The trouble with it is that it requires computation for each sent message, which is bad for senders with low resource devices or legitimate mailing lists. I want to propose a variant.

Instead of creating an expensive hash against the message, create an even more expensive hash against the (sender TLS certificate, receiver domain name) pair. This implies using TLS but it works just as well even if the certificate is self-signed. Then each mail server only has to generate a hash once per recipient domain, ever. Every message that mail server sends to that domain is tagged with that hash. A legitimate mail server will have already computed hashes for all the domains its users regularly correspond with and rarely if ever need to do any more expensive computations.

If spammers do the same thing then the receiving server can mark all messages sent with that hash as spam. So there is a highly disproportionate cost to spammers (even if they have more computing power) because to avoid that they have to continuously generate expensive new hashes. Which can be made arbitrarily expensive because legitimate servers only need to do it once. And a new hash is much less valuable to a spammer than a domain name or IP address is today because each hash can only be used against one recipient domain. The required amount of computation can be set by the receiving server so domains with more users can require more computation.

If a legitimate mail server is compromised by a spammer then it will have to generate all new hashes (because the spammer will presumably immediately ruin the reputation of the compromised ones), but the reputation of the legitimate sender's email domains is unharmed because the reputation is tied to the hash computation, not the sender's domain name(s).

And adding support to mail servers would require no configuration whatsoever. You install server version N+1 and it starts tagging outgoing messages with hashes that receiving servers can verify.
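A toy sketch of the minting side, under my assumptions above (SHA-256 and the separator are arbitrary choices here; the difficulty would be receiver-chosen, and a certificate fingerprint stands in for the TLS cert):

    import hashlib, itertools

    def mint(cert_fingerprint, recipient_domain, bits=20):
        # Find a nonce whose hash over the (cert, domain) pair has
        # `bits` leading zero bits. Done once per recipient domain, ever.
        target = 1 << (256 - bits)
        for nonce in itertools.count():
            d = hashlib.sha256(f"{cert_fingerprint}|{recipient_domain}|{nonce}".encode()).digest()
            if int.from_bytes(d, "big") < target:
                return nonce

    def verify(cert_fingerprint, recipient_domain, nonce, bits=20):
        # Receivers only pay one hash to check the tag.
        d = hashlib.sha256(f"{cert_fingerprint}|{recipient_domain}|{nonce}".encode()).digest()
        return int.from_bytes(d, "big") < (1 << (256 - bits))

Verification is one hash; only a new (cert, domain) pair ever costs real work, which is exactly the asymmetry described above.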

So here's the traditional form: http://craphound.com/spamsolutions.txt

How'd I do?

13
staunch 7 hours ago 2 replies      
This is largely true and hugely disappointing. Our startup (https://portal.cloud) is making it possible for non-hackers to self-host their own email servers. It works very well for the most part, but we have had to explain to a number of them why their email sometimes gets bounced by the big proprietary cloud services.

All of our users have their own domain names, IP addresses, SPF records, and correctly configured (and up to date) Postfix SMTP servers. There is absolutely no excuse for not always delivering their email, and yet this is what the big companies do.

14
SHIT_TALKER 1 hour ago 0 replies      
I see lots of threads shitting on the guy for doing it wrong vis-a-vis his configuration whilst ignoring his actual problem: An IP address without a reputation score. I've had the same problem and reached the same conclusion. The address can't just be clean, as in not on a blacklist, but has to essentially already be whitelisted via a "known good" reputation score or mail automatically gets blackholed. How do I get my VPS provider of choice to give me an IP address with a good reputation score?
15
9248 6 hours ago 0 replies      
> This isn't how the internet is supposed to work. As we continue to consolidate on a few big mail services, it's only going to become more difficult to start new servers.

And this is exactly the reason I set up my own mail server. I'm only 1 man, but I hope more people will do so with time, thus requiring the "big ones" to work on better algorithms for filtering and not base it on reputation.

16
jimktrains2 8 hours ago 1 reply      
Email has been my last holdout from switching away from gapps completely. I don't want to have to deal with any of this, especially as I do business communication with clients via it. Email wasn't supposed to be like this, and there has to be a better way to enable non-giants to successfully deliver email.
17
TazeTSchnitzel 1 hour ago 0 replies      
I use a private email provider (privateemail.com, via Namecheap), which doesn't seem to stop others receiving my emails, but sites sometimes have trouble emailing me, oddly enough.

Though that might just be because I have a 6-character domain (ajf.me).

18
ofir_geller 8 hours ago 1 reply      
Did you try to acquire reputation by having real people with accounts in "known" services send emails to your server?
19
gwu78 6 hours ago 0 replies      
It may be infeasible to run a new SMTP-based mail service from "residential IP's" that can interact with the existing email empire, dominated by store and forward middlemen who expect to make money from the "free" email service they provide.

That empire amounts to a junk email delivery service and, later, a way to gather information about email users. The latter purpose is probably why you want to run a new email service?

However it is certainly feasible to run a new SMTP-based email service from residential IP's that does NOT interact with the existing email empire. One with no middlemen. The sender's SMTP server talks directly to the recipient's SMTP server. You decide what port you want to use. There are thousands to choose from.

There are multiple ways to do this, but I rarely if ever see this option discussed. I suspect it's because like DNS most users are not comfortable configuring mail servers nor with NAT traversal.

If indeed the motivation for running your own mail service is that you do not want your mail stored on third-party servers (whether in the sender's mail folders or the recipient's), then the ability to interact with the existing store-and-forward email providers seems a counterproductive requirement.

20
jwr 7 hours ago 0 replies      
This is indeed worrying. I've been running my own E-mail servers for the last 20 years or so and even though my problems weren't as severe, I did run into a few cases where the "big ones" were at least delaying my E-mail.

But this kind of problem invariably arises when we go from a fragmented Internet with lots of small hosts/providers to an Internet of several walled gardens, run by the big guys.

The real answer is to offer a reasonable self-hosting competitor to GMail.

21
karlshea 8 hours ago 0 replies      
Having run my own email server about a decade ago I can kind of understand why they are just rejecting things. Dealing with spam for even just a handful of domains was a nightmare.
22
calpaterson 7 hours ago 0 replies      
I have self-hosted my mailserver for a long time and started to get problems a couple of years ago. The main issue is corporate networks running McAfee's "MxLogic" product that claim to bounce my mail and tell me so, but then go on to deliver it almost all the time.

The difficulty in getting feedback is extremely frustrating, particularly compared with getting feedback from, eg, google's webcrawlers.

23
andrew-lucker 8 hours ago 0 replies      
It's only a matter of time before we start seeing peering issues between major email providers. This is not a system for sharing, it is competition.
24
jmount 7 hours ago 0 replies      
It is simple: the more fear, uncertainty and doubt the "big email providers" can cast on not using one of the big email providers, the more they chase everyone into their business (where they can read it). You can go on and on about how it is "technically hard to fix email", but that is a second-order effect compared to not even trying.
25
drumdance 8 hours ago 1 reply      
Surprised he didn't mention third-party reputation providers such as Return Path.
26
MortenK 5 hours ago 0 replies      
OP, I don't know if you are reading the comments here but in case you do: Don't get discouraged so quickly.

The reason this is happening is in the blurb from the MS postmaster help page: your IP doesn't have a reputation yet.

The reason these rules are in place isn't email monopoly, it's spam. If anybody could set up an SMTP server and start firing off large amounts of mail, spam would be even more endemic than it is today.

You can configure your server perfectly, but that doesn't mean much, since it's your IP that's the problem.

If you have legit objectives, it's a pain in the ass for sure. But you are not the only one having this problem, and there's a solution for it.

All the big email service providers (ESP's) like Neolane, Exact Target, Mailchimp, Campaign monitor etc share this problem when they onboard a new client, who requires their own IP.

Deliverability is a surprisingly deep, technical topic, and all major ESP's have entire teams of specialists working on this.

If you want to make such a service as Fastmail, you need to get really into deliverability. It's not a walk in the park, but it's not impossible either.

I'm not a specialist in this particular area myself, so I can't give you that much specific advice. I've just worked elbow to elbow with a lot of these guys, so I know what kind of challenges they work with.

One thing I know for sure is really important is the "warming up" of IPs. Basically the IP you are sending from needs to accumulate some reputation over a period of time, typically a month or two.

If you send out reasonably small amounts of mail to email addresses that exist, and the recipients do not explicitly report you for junk mail, your IP gets whitelisted and you will get a much higher delivery rate.

There's no quick fix unfortunately, and email reputation is hard to gain and fast to lose.

But it certainly can be done. You sound very competent on the server side of things, so to get your fastmail-like service up, I think it's just a matter of a bit more persistence and studying deliverability as a technical subject.

Hope this helps.

27
Animats 7 hours ago 0 replies      
My experience has been that sending from my server works fine if it's in DNS and RDNS. Sending in that server's name works fine if the SPF records are present. But I don't bulk send; everything I send is something I typed, other than some messages the server sends to me periodically.
28
grey-area 6 hours ago 2 replies      
Everyone seems to agree that email is broken (and yet incredibly useful and almost universal in reach).

So moving on from there, how do we fix it? Who is currently working on fixing it? What would a new protocol look like?

29
fensipens 5 hours ago 0 replies      
> this server was configured perfectly: (...) SPF, DKIM and DMARC policies in place

SPF, DKIM and DMARC are no indicators of spamminess of a source. These systems have a completely different purpose.

30
jfaucett 7 hours ago 0 replies      
You can talk about HTTPS and forcing encryption all over the web however much you want, but as long as you're stuck with SMTP and the few large corps as the only viable mail solutions, you can forget individual citizen data privacy... at least that's my 2 cents.
31
phantom_oracle 6 hours ago 1 reply      
The problem goes both ways.

Spam needs to stop and so too does the convergence of Internet services in general (not just email).

Frankly, the solution already exists, and just like how billion-dollar companies start with the tech community, techies need to embrace the idea of decentralization by adopting some trust-based model.

Frankly, I do not see why encrypted emails cannot be accepted by users through some peer-sharing agreement where the public-key is online.

There's tons of solutions that are easy for tech people and if this community embraces it, it trickles down, like how a lot of other things do in tech.

32
tete 6 hours ago 0 replies      
I think it's exaggerated. I am happily running a private mail server on a tiny vserver. My emails get through to whomever I might mail and I do so a lot.

I once heard that my email got flagged as spam.

Of course, if you don't set up your mail server correctly it might be that it's flagged as spam, but it's not really harder than setting up various other things correctly.

It sounds like it was set up correctly by the author. Most people use SpamAssassin, as do big companies. So it should be good.

Maybe the network itself wasn't considered to be good by the mentioned big companies.

33
z3t4 4 hours ago 1 reply      
I guess big providers have to deal with "newbies" all the time who don't know how to configure their servers and run open relays, and don't know how to add extra headers etc.

That said, silently dropping messages without a notification is probably illegal! And pretty serious! So if you know what you are doing (and you are not a spammer) you should send a cease and desist!

Just make sure you use double opt-in and that providing an e-mail address in sign-ups/etc is optional!

Some will however put your mail in a spam-folder and it's not much you can do about it, just hope your readers complain to their provider.

So basically: set up your SMTP relay correctly! Make sure you are not on any blacklists. Add extra headers like DKIM / Precedence, add SPF (don't forget IPv6) and PTR. Add your relays to whitelists. Publicly publish a privacy and email policy (important! with opt-in and optional clause); link to them and fill out some "email provider" forms at Gmail/Microsoft. Send out a bunch of test mails. This will take a whole day, but if you do this, you will have no problems unless your IP or domain is perma-banned.

34
grey-area 8 hours ago 2 replies      
Perhaps the problem here is that there is no verified identity for email servers?
35
k2enemy 4 hours ago 0 replies      
The anti-competitive consequences of this are really interesting. Does anyone know if there are historical statistics on email provider market shares? It would be interesting to see how things have changed over time.
36
motoboi 4 hours ago 0 replies      
McAfee Mail Gateway also uses a reputation system, called GTI. The appliance hashes the message, looks it up on this online service and gets back a score.

There is a KB explaining that new senders get a high score.

37
mgalka 6 hours ago 0 replies      
I suspect many of my messages are getting flagged as spam as well. Is there an easy way of checking whether emails are going through?
38
maerF0x0 6 hours ago 1 reply      
Nobody else has mentioned it, so I will: this is a great situation for the NSA. Get all 400M accounts in one fell swoop by using a gag-ordered warrant on Google (or Microsoft). Easy peasy. Much easier than contacting a sysadmin about their 1 account that isn't being fed into the behemoth.
39
atmosx 5 hours ago 0 replies      
I had not realized that I have been running my private SMTPd (+ IMAPS) for nearly 8 years. I never had issues. My score on test-mail is 9/10 because of lack of DKIM. I will implement DKIM, see if I can get 10/10.
40
grinich 7 hours ago 0 replies      
I've looked into this a bunch as we've been building Nylas. Our backend is essentially a cloud mail user agent (MUA), but can have similar issues to a MTA or mailbox provider.

It turns out that creating a successful product in the email space requires you to build relationships and partnerships with the existing vendors/providers. It takes a lot of work, and is part of what you pay for when using Mailgun/Mandrill/etc.

These "greylists" and systems that drop mail from unknown IPs have been pretty carefully designed and tuned to combat the insane amount of spam out there. They work remarkably well, and for most of today's email users, spam is no longer an issue. (The current state of email abuse innovation is promotions/marketing/etc. which is a more subtle challenge.)

At the end of this article, the author writes, "This isn't how the internet is supposed to work." Ironically this system of obscure reputation-based email is a direct result of how the email system was actually designed to work, with a total lack of permissions or feedback loop. Many SMTP servers used to not even require a password.

Stuff like DKIM/SPF and DMARC is a step in the right direction. But the RFCs upon which our email system is based were written decades ago, and in many cases have fundamental flaws, like SMTP leaking metadata no matter what. I could go on and on about issues with email, and why I care about getting them fixed, but let's just say it was designed in a different era of Internet with different constraints and opportunities.

So how do you build a new email service and not get blocked? Well, you spend a few weeks or months emailing, calling, Skyping, and meeting with folks in the current space. You work your way up through marketing support and random protocol discussion lists until you are talking with the folks who can influence which IPs are blocked/unblocked. Then you convince them you are (1) building a legit venture, (2) are worthy of their trust, and (3) don't directly compete with them. Then you'll get a small number of clean IPs and you must not screw up!

There are a few hacks, like sometimes a single partner/vendor will sell you a block of clean IPs and help manage the spam reputation. But usually it's just a lot of sweat and annoying phone calls. It takes way, way longer than setting up SPF/DKIM. The challenge is more relationships than technical.

Once you have a new email sending provider, the burden shifts to you for doing abuse/spam prevention. And you find yourself implementing many of the strategies and systems you cursed when getting started. But that's the circle of rfc2822 life I guess.

Oh, and never use EC2 IPs for sending mail. Most of them have been burned by spammers.

41
gull 5 hours ago 0 replies      
It's a lost cause.

The blacklisting ugliness is one more sign email was badly designed. One more nail in the coffin.

42
bad_user 5 hours ago 0 replies      
The irony is that at least half the spam I get comes from @gmail.com addresses.
43
mike_hearn 8 hours ago 1 reply      
This guy's story is a sad one, but I'm sure we're missing some important information here. New mail servers get set up all the time; it's not impossible for such servers to be accepted by the big players.

If none of these major services ever learned that his server was OK, it's likely that users weren't unmarking the mail he sent as spam. And that leads to the question of why not.

44
lisa_henderson 6 hours ago 0 replies      
This is a true story:

I rent some servers from the Rackspace cloud for personal use. I have my own sites on these machines, and my own email servers.

Meanwhile, I have a day job, and lately it has been consuming 12 hours a day. We missed a deadline and we have all been working like crazy to catch up. I have fallen behind reading my personal email.

Roughly a month ago, my friends who use Gmail stopped getting my email. Or rather, they did not know I was sending them email, because all of my email to them was going to spam.

After a few weeks, I finally had a free weekend to catch up on my personal life, so I did some investigating. Turns out Rackspace had switched over to IPv6 in a way that impacted my email. I did not have a Sender Policy Framework (SPF) record for IPv6, only IPv4.

It's likely that Rackspace sent me an email about this, though I never read it because I was busy.

This was easy to fix: I added an SPF record for IPv6.

However, these kinds of issues do make it harder to maintain a personal email server. It's tough for us to keep up with the changes.

45
larrys 7 hours ago 0 replies      
Edit: They said they checked the spam lists but the ones that I've listed below may be helpful to others.

OP might have had an IP previously used by spammers or in a spam block. (See edit).

Important to check the IP before using it if it's been given to you by, say, the VPS provider or the upstream, to see what, if any, reputation it has.

Here are two tools to use:

http://multirbl.valli.org/

http://mxtoolbox.com/SuperTool.aspx

And there are a bunch of similar tools.

46
Sir_Cmpwn 7 hours ago 0 replies      
Mail spammers are truly vile people. They have ruined it for everyone.
47
zer0defex 6 hours ago 0 replies      
Just goes to show reputation isn't the end-all, be-all solution to everything. It's so often championed by the Linux kernel team as a reason why the distributed model works, and it's evident that in that case it certainly does. Here though, there's much outcry about how you can't set up a box and instantly be as respected as established boxes that have earned rep over time. This post just comes off way too butt-hurt for my taste.
Metamorphosis and Millimeters themetricmaven.com
8 points by colinprince  2 hours ago   2 comments top 2
1
nraynaud 29 minutes ago 0 replies      
As a side note, the French (at least) architectural drawings are in centimeters, that's the only place I have seen centimeters as the default, but it's one of the most important engineering field.
2
powera 1 hour ago 0 replies      
It's almost as if the metric system was designed to be useful to ordinary people rather than to fit a nice-looking model of units.

Most people use a ruler or their hands to measure things, not dial calipers. And most people don't want to have to say "230 millimeters" as opposed to "23 centimeters" (or more realistically, 20 centimeters).

Show HN: Arguman Argument Analysis Platform arguman.org
93 points by fatiherikli  10 hours ago   26 comments top 15
1
arjie 6 hours ago 2 replies      
I've always wondered if there was a platform where we could have some standardized set of arguments regarding a proposition. Then, one would not need to tolerate repetitive pointless discussion: something that usually arises regarding propositions that require little expertise to discuss.

For instance, nothing would make me happier than to be able to reply to the nth discussion regarding the idea that "the GPL is freer than the BSD licence" with a universal fully-qualified link to every argument for and against the idea. Or "For software engineers, open-plan offices lead to greater productivity than individual offices".

While it may appear that this would lead to some sort of _Futurological Congress_-esque situation where we respond to people in paragraph numbers, it has many advantages:

* No longer will people be misled by a correct statement poorly argued for.

* No longer will message boards be polluted by the nth iteration of the same argument.

* Undiscovered lines of argument will be universally available.

Of course there's the disadvantage that you'll get less participation, and there's value in just having some number of comments even if they're repetitive: at the least, the desire to respond to that may bring people who later on make novel arguments.

This seems like a fine UI to do that. Deep link to the relevant sub-graph, and let the collective intelligence of thousands do your arguing for you. I like it.

2
platz 40 minutes ago 0 replies      
"But", "however" and "because" are too simple. There needs to be weighting. Why not use something like what law folks use to qualify arguments: https://en.m.wikipedia.org/wiki/Stasis_(argumentation_theory...

Also look into Toulmin (claim, warrant): http://commfaculty.fullerton.edu/rgass/toulmin2.htm

3
vinchuco 5 hours ago 0 replies      
It is an interesting turn of events that Arguman can be used to improve the design of Arguman.

[1] http://en.arguman.org/there-should-be-inbuilt-definitions-fo...

4
grizzles 1 hour ago 0 replies      
I built something like this back in the day. It looks like it's lacking a weighing mechanism. This (https://en.wikipedia.org/wiki/Subjective_logic) is a good formalized framework to use for that, if the authors are here reading along.
5
graphql-tlc 5 hours ago 0 replies      
When will the arguments be formally verified from sets of axioms rather than just human-verified (which is prone to error, politics and groupthink)?
6
spoiler 6 hours ago 0 replies      
I love the idea of this!

I have a question: When using because/but/however, do they apply to the hypothesis or to the premise? It would seem logical that they apply to the premise, however the count on the homepage is slightly misleading. I thought some people were "becausing" a lot to a subject, when in fact it counted the becauses on the "buts," too.

Also, the design could be improved, but it's usable as it is.

P.S: Gosh, my sentence is confusing.

7
arisAlexis 2 hours ago 2 replies      
Do perfectly rational, highly intelligent agents argue, or is it our imperfection that needs such a tool? If so, can people really change their minds after discussing like this?

(I do think it's a great platform)

8
fatiherikli 7 hours ago 1 reply      
Argument mapping is producing "boxes and arrows" diagrams of reasoning, especially complex arguments and debates. Argument mapping improves our ability to articulate, comprehend and communicate reasoning, thereby promoting critical thinking.

You can think of argument maps as visual hierarchy mappings.

Arguman.org's aim is for arguments to be mapped successfully by many users.

9
bobcostas55 2 hours ago 0 replies      
Any chance of releasing the source? I'd love to be able to self-host this...
10
joslin01 2 hours ago 0 replies      
There should be a wikipedia for these. Argupedia.
11
kovek 20 minutes ago 0 replies      
I love this!!! I wish it gets pushed forward! I wish a lot of people would use this! I think it is a great platform!

I worked on something very similar as one of my very first projects which got me into programming. I wanted there to be a debate website where anything could be debated using arguments. I've found that the debates I would see on TV or in everyday discussions would not be good enough, because:

- There was space for people to diverge off of the discussion

- When the discussion would fork, the participants might forget some previous arguments that were made

- It would be difficult to come back to a previous point.

- People would have a bias towards the arguments made by the most prestigious side among those discussing a certain matter.

- It was possible to make some claims without backing up proofs/sources.

- Emotions could become a factor. The discussion can heat up.

I thus wrote a small website where one could post an idea as a node, and others could reply in favor of the idea, against it, or under a neutral position. The users could also vote for some nodes. The website would then become a collection of trees. As I see it, it could be used to discuss any matters! However, I've never really pushed the idea forward.
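A minimal sketch of the node model I mean (the names and example idea are made up, this isn't the original code):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        text: str
        stance: str = "root"    # "for", "against", "neutral", or "root"
        votes: int = 0
        children: list["Node"] = field(default_factory=list)

        def reply(self, text, stance):
            child = Node(text, stance)
            self.children.append(child)
            return child

    idea = Node("Open-plan offices lower productivity")
    idea.reply("Interruptions fragment deep work", "for")
    idea.reply("Cheaper space frees budget for headcount", "against")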

I've always thought about picking the project back up, as I was passionate about the idea. I've never really gotten around to doing so (I would love to discuss how to get projects pushed forward). Through the years, I thought about this website, and I've found some problems that could arise:

- There would have to be a good user base. My perception was that people would have less incentive to discuss where no one would listen.

- How do you simplify ideas as much as possible? Some texts can be summarized or shortened (and some connections like relationships to other nodes could be added) and still have the same idea. I'm guessing this would be done using moderation. I think this is somewhat relevant because if you're browsing a tree of ideas, you want to do so seamlessly such that you do not lose interest in providing your input.

- For some, it is tiring to undergo a proper debate where the claims made need to be backed up. A lot of people like to discuss freely, in a comfortable setting. The usual reply system works for that.

- I have found that many people like to stick with their beliefs more than with research. (This point applies to debates which need evidence. Many philosophical debates would be fine without the need for evidence.)

- If a node would get too big, it would contain more than one idea. There has to be a system to split nodes apart.

- How do you deal with merging nodes?

- How do you manage spam and moderate node creation? (I did not have a good understanding of how to achieve these)

- How do you deal with nodes that have been edited? I've found a way to deal with this, but it's not as pretty as I would have liked it.

- Watching websites like Reddit and Facebook, I realized the reply system was enough as it allowed people as much room as they needed to make their point, using text. The only issue is organizing the ideas properly in this case. Hacker News had the reply system and people were using it to lead great discussions.

I've also thought about extending relationships beyond just logical relationships. The reason I was looking to do this was that I wanted to find the simplest and most elegant solution that could apply to many use cases (not ALL the use cases, though). It fit (and still somewhat fits) how I think about writing good software (please, someone correct me if I am wrong). The relationships would be akin to: Grows from, Follows, Is of type, Contains, etc.

I thought that this would essentially grow into a database of everything, a little bit like Wikipedia. Although Wikipedia does not allow much discussion (as far as I know).

12
voaie 6 hours ago 1 reply      
When will different branches get merged? ;) It is great to build the mind map and log the history just like `git` does. But people still need room to discuss more.
14
ionforce 5 hours ago 0 replies      
I think this idea is great in the nerd-good sense, as in I've thought of this myself many times...

But ultimately I wonder if it will be useful or really gain traction.

Maybe you can see it with really popular, hot questions, and get some social media buzz.

15
tengwar 5 hours ago 0 replies      
I've meant to do something like this for a long time.
Windows corrupting UDP datagrams geeky-boy.com
38 points by gkfasdfasdf  5 hours ago   6 comments top 2
1
detaro 4 hours ago 2 replies      
I honestly think the theory of the NIC messing up one of the offloaded tasks is a likely one, if I remember other strange errors that have happened in this context. Too bad that he doesn't have access to the machine to do more thorough tests.
2
tiernano 5 hours ago 1 reply      
Should this not be "Windows corrupting UDP datagrams in some (possibly lower-chance) cases"? The list of circumstances for the bug to occur (from the article):

1. UDP protocol. (Duh!)

2. Multicast sends. (Does not happen with unicast UDP.)

3. A process on the same machine must be joined to the same multicast group as the one being sent to.

4. Windows' IP MTU size set to smaller than the default 1500 (I tested 1300).

5. Sending datagrams large enough to require fragmentation by the reduced MTU, but still small enough not to require fragmentation with a 1500-byte MTU.

Data-Intensive Text Processing with MapReduce (2010) [pdf] umd.edu
7 points by sytelus  2 hours ago   discuss
The magic kernel johncostella.com
155 points by wazoox  13 hours ago   19 comments top 9
1
dognotdog 6 hours ago 2 replies      
When seen as a [1 3 3 1] kernel (the 4th row of Pascal's Triangle), it is more easily revealed as a discrete Gaussian kernel, which is used, for example, in scale-space representations in image processing.
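A minimal sketch of the 2x upsampling this implies, assuming the usual polyphase reading (insert zeros, then convolve with [1, 3, 3, 1]/4, so every new sample is 3/4 of its nearest input plus 1/4 of the next nearest); the code is mine, not from the article, and edges are simply treated as zero:

  import numpy as np

  def magic_upsample_1d(samples):
      # Double the sampling rate by inserting zeros between samples...
      stuffed = np.zeros(2 * len(samples))
      stuffed[::2] = samples
      # ...then convolve with the [1, 3, 3, 1] kernel, normalized to
      # sum 2 because half of the stuffed samples are zero.
      kernel = np.array([1.0, 3.0, 3.0, 1.0]) / 4.0
      return np.convolve(stuffed, kernel, mode="same")

  # [0, 4, 8] -> [0, 1, 3, 5, 7, 6]: interior outputs are 3/4 + 1/4 blends.
  print(magic_upsample_1d(np.array([0.0, 4.0, 8.0])))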
2
rer0tsaz 4 hours ago 0 replies      
Let me share my experience with chroma upsampling and smooth jpeg decoding.

At first, I optimized each channel, then upsampled the chroma channels using replication. This works terribly as you can see in the article.

So then I changed to linear interpolation. Briefly:

  +---+---+
  a x b y c
  +---+---+
We know the values in a and c but not the ones in between. The distance from x to a is half a pixel and the distance from x to c is one and a half pixels. Then linear interpolation gives x = a + (c - a) * (1/2 - 0) / (2 - 0) = 3/4 a + 1/4 c.
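As a sketch, applying that weighting along a whole row could look like this (the helper is my own illustration; edge samples are clamped rather than interpolated):

  def upsample_chroma_row(chroma):
      # Each chroma sample covers two full-resolution pixels; every
      # output is 3/4 of the nearest sample plus 1/4 of the next one.
      out = []
      for i, near in enumerate(chroma):
          left = chroma[i - 1] if i > 0 else near             # clamp edge
          right = chroma[i + 1] if i + 1 < len(chroma) else near
          out.append(0.75 * near + 0.25 * left)
          out.append(0.75 * near + 0.25 * right)
      return out

  print(upsample_chroma_row([0.0, 4.0]))  # [0.0, 1.0, 3.0, 4.0]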

This worked decently, but it still showed fringes around sharper edges. I considered using more complicated upsampling methods like Lanczos or Mitchell, but instead went with optimizing a full size image with constraints on the downsampled image. By avoiding upsampling I got my optimized high resolution image for each channel.

But there were still fringes! As it turns out, just because each channel was optimized separately doesn't mean that the image as a whole is optimized. So I switched to optimizing the three YCbCr channels together, not looking at the differences abs(x_{i+1} - x_i) but at the differences sqrt((Y_{i+1} - Y_i)^2 + (Cb_{i+1} - Cb_i)^2 + (Cr_{i+1} - Cr_i)^2). This actually eliminated the fringes.

The final result is https://github.com/victorvde/jpeg2png

3
statusreport 6 hours ago 1 reply      
Sorry guys, but no magic is involved here: http://cbloomrants.blogspot.com/2011/03/03-24-11-image-filte...
4
pmjordan 9 hours ago 0 replies      
I'll need to try this as an OpenGL (ES) shader in an app I work on regularly. I've been meaning to replace the default bilinear filtering with something a bit nicer-looking for ages, and this seems like it fits the bill nicely, as it avoids the usual ringing and moiré issues. (I don't need to worry about perspective correction as the app is 2D only.)
5
wazoox 11 hours ago 0 replies      
My good friend Kyle posted his C++ port here: https://github.com/kylegranger/ImageScaling
6
krakensden 2 hours ago 0 replies      
What's the popular JPEG library that this is from? Why the refusal to mention it?
7
jevinskie 7 hours ago 2 replies      
This brings back high school memories of performing Gaussian blurring of 2-bit grayscale hand drawn sprites drawn on graphing paper. Yes, I got bored in class! I did use a calculator for assistance. :) I never knew there was such a remarkably simple kernel that provides good scaling!
8
torpet 8 hours ago 0 replies      
Finally all those haters who think low-pixel images cannot be zoomed in on are proven wrong... https://www.youtube.com/watch?v=I_8ZH1Ggjk0

Now someone only has to find a way to interpolate reflections in sunglasses, then I can die a happy man.

9
detrino 6 hours ago 0 replies      
If you prefer blurry images, as the author seems to, you can pass parameters to your cubic filter to add more blur than the (probably Mitchell or Catmull-Rom) parameters that GIMP is using. I'd bet most people would find the result superior to this filter.
The Internet of Code begriffs.com
44 points by kushti  6 hours ago   1 comment top
1
joeyh 33 minutes ago 0 replies      
I remember reading about this earlier here:

http://www.haskellforall.com/2014/09/morte-intermediate-lang...

http://www.haskellforall.com/2015/05/the-internet-of-code.ht...

 This is just one piece of the puzzle in a long-term project of mine to build a typed and distributed intermediate language that we can use to share code across language boundaries. I want to give people the freedom to program in the language of their choice while still interoperating freely with other languages.
Super interesting ideas, great that it's progressing beyond that initial sketch.

A developers introduction to 3D animation and Blender oreilly.com
106 points by okfine  10 hours ago   54 comments top 4
1
zaphar 9 hours ago 5 replies      
Blender may be one of the most successful and best-run artistic/media projects in open source.

It manages to pack a lot of power into a very small package. And the Open Movie/Open Game projects help to drive the direction of the application with concrete goals. Personally, I think it's one of the best media production suites out there for the hobbyist. The power + price (free) can't be beat for the non-professional.

2
PixelMath 9 hours ago 2 replies      
I used to work in Hollywood visual effects. Over the span of many years I have tried almost all the software out there, and I find Blender very powerful, but my personal favorite is a package named Houdini* (not free, but you do get an apprentice version). The core methodology Houdini is built on is creating procedural systems for everything, which I think is of more relevance for this community. Do check it out; I am sure members of this community can put it to use for things not even its creators would have imagined.

*http://www.sidefx.com/

3
yohann305 9 hours ago 2 replies      
Feel free to downvote me, but the title is quite misleading, as I was expecting a developer introduction to 3D to be heavier on code and show what's under the hood.

For example, the tutorial starts by creating a box; I'd love to see the equivalent using code. Same goes for adding vertices and moving vertices.

Cheers!

4
shawnfratis2 2 hours ago 0 replies      
With all the complaints about Blender's UI, I've never understood why more people haven't developed plugins that offer a UI alternative. I wanted a simpler UI for Maya so I wrote my own: https://github.com/shawnfratis/Scrimshaw-MEL-Mini-GUI-for-Ma... . It's not perfect but it works for my uses. I'd think a program as open as Blender would lend itself to something like that.
On Botnets and Streaming Music Services vice.com
99 points by 6stringmerc  11 hours ago   62 comments top 13
1
13thLetter 7 hours ago 1 reply      
The takeaway I'm getting from this is, as with other websites, the attempt to fund streaming music indirectly via targeted advertising is hopelessly unable to keep up with ever-more-clever click fraud. At best, we end up with an arms race of more and more powerful "criminal" botnets and more and more heavyweight advertising tech crowding out the original content. I'm becoming very sympathetic to the viewpoint of backing out towards either completely untargeted advertising (which, paradoxically, can be far more effective) or -- and, admittedly, I'm going crazy out on a limb here -- paying for content.
2
dontreact 9 hours ago 0 replies      
This seems to be another argument in favor of the model proposed here:https://news.ycombinator.com/item?id=9226497

Split revenue per individual amongst artists, instead of splitting total revenue amongst artists.

3
ChuckMcM 10 hours ago 1 reply      
I wondered about this; I figured someone had figured out they could use pretty standard click-fraud techniques to milk money out of the pay-per-play ecosystem.

No doubt someone in operations over at Spotify spends their nights trying to detect these kinds of patterns. It would be interesting to hear their take on it.

4
SeanAnderson 9 hours ago 3 replies      
(I misread. It's 8/100th of a cent. Much more realistic.)

Artists on Spotify earn 8 cents each time their song is played? That figure seems really, really high to me.

I'm not especially surprised this is possible, but it comes as a huge shock that it would be financially profitable for someone.

A quick glance at some other articles (http://www.theguardian.com/technology/2015/apr/03/how-much-m...) shows drastically different figures:

"For example, Spotify says that its average payout for a stream to labels and publishers is between $0.006 and $0.0084 but Information Is Beautiful suggests that the average payment to an artist from the label portion of that is $0.001128 this being what a signed artist receives after the label's share."

This would make it much more expensive to run a botnet through AWS than any potential profits it could generate.

Some other thoughts after reading more closely:

- It's surprising that the minimum listen time required for payout is 30 seconds when average song length is 3 minutes (or even higher? A reported 3m 45s: http://www.statcrunch.com/5.0/viewreport.php?reportid=28647&...). Is listening to 3/18th of a song really enough to warrant payout? Maybe.

- The opening sentence isn't all that truthful. It's implying that an average user is just going to open Spotify, mute it, and go to sleep. That means they won't be there to skip every 30 seconds, so we fall back to the 3-minute average. Assuming you sleep for 8 hours, that means you're only going to get 160 plays, or ~12 cents, not 72.

5
mootothemax 9 hours ago 3 replies      
Isn't there potential here for a much more nefarious plan than merely earning revenue from fake listens?

If you could do the same thing across a few services, spreading the number listens out on a viral pattern, based on a bit of investment in highly marketable songs, it sounds like you could create a bedroom-singer rags-to-riches superstar story and potentially make millions upon millions.

6
mschuster91 5 hours ago 0 replies      
As long as Spotify doesn't make a loss on the spread between the payout per streamed listen event and the pay-in from advertising, I don't see any problem.

Spotify has a pretty much working monetization model, they could just tell advertising to fuck off. Their free model is like classic radio, where advertisers pay without knowing if there is one listener tuned in or millions (literally).

7
z3t4 7 hours ago 1 reply      
The difference between a "bot" and a real person is that the real person has money to spend. Now, how do you figure out if someone is a real person or not?
8
tracker1 6 hours ago 0 replies      
In the end I feel we need better captcha options: images for most people, with options for the impaired. The goal is stuff that's relatively easy for a person (click the picture of a cat) but harder for a computer to do...

Another option might be a regular challenge-response that makes interaction harder and more costly for a fake listener: having to run a PBKDF2, scrypt or similar derivation on a given input at regular intervals (the service could have a pre-computed pool to randomly serve out, so it wouldn't have the same costs).
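A rough sketch of that idea (the token, salt size and iteration count here are hypothetical, just to show the shape of it):

  import hashlib
  import os

  def make_challenge():
      # Server side: issue a random salt, possibly drawn from a
      # pre-computed pool so the server pays the cost only once.
      return os.urandom(16)

  def respond(challenge):
      # Client side: an intentionally slow key derivation that a
      # listener must redo at regular intervals to keep streaming.
      return hashlib.pbkdf2_hmac("sha256", b"listener-token",
                                 challenge, 200000)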

They could also flag accounts that get more than N hours of play in a day, or a number of play-days much higher than a typical listener's, or that play more artists/songs outside the top 10k songs of the previous month, asking them to log in to their account or validate their email address at that point... Anything that makes the process much more complicated to automate but would affect a very low number of real people.

Yes, it's an arms race, but there are a lot of things that could be done to keep the barbarians out of the gates... Not to mention other suggestions that split per-user royalties among artists, instead of the pool as a whole. That, combined with other models, could go a long way here.

9
ZoF 7 hours ago 0 replies      
This is referenced in the article.
10
pandog 6 hours ago 1 reply      
There is really no need to call this a botnet
11
ryanlol 5 hours ago 1 reply      
I think calling this a botnet, albeit technically correct, is both really silly and clickbaity.
12
acd 6 hours ago 0 replies      
You can filter this out using spam-filtering and bot-detection security methods.
13
hiou 10 hours ago 2 replies      
I feel like the writer of this article has a fundamental misunderstanding of Spotify's business model. The number of plays influences how much money Spotify brings in from advertisements. As far as they are concerned fake and real plays are not much different beyond maintaining credibility with their advertisers.
Leaked Pinterest Documents Show Revenue, Growth Forecasts techcrunch.com
58 points by prostoalex  8 hours ago   43 comments top 10
1
jedberg 6 hours ago 3 replies      
It's unfortunate that potential LPs can't be trusted to keep these kinds of things secret, because it makes it harder for founders to trust them in the future.

As an occasional LP myself, I always consider financial documents like this just as secret as my own bank statements and health records.

2
orthoganol 2 hours ago 0 replies      
$90 million revenue run rate this year and an $11 billion valuation...

Are VCs trying to rush everything onto the public markets while they can, or what? Slightly disconcerting when people say VCs aren't trying to do things like they did in the last boom.

Yes, I really don't think you can justify a valuation like that. It all depends on Pinterest having the popularity of Facebook in a few years, which is an incredible lie.

3
gitah 7 hours ago 1 reply      
The 2018 projection of $2.8 billion revenue with 329M MAU for Pinterest is roughly what Twitter will achieve in 2016.

Twitter has a market cap of $20B vs. $11B for Pinterest, so there is plenty of upside if Pinterest hits its 2018 numbers and goes public. That is assuming Twitter's current valuation is reasonable, which is debatable.

4
pdq 3 hours ago 0 replies      
Both Pinterest and Twitter have been getting hammered by Instagram in popularity over the past 2 years.

Have a look at Google Trends [1].

[1] https://www.google.com/trends/explore#q=instagram%2C%20twitt...

5
free2rhyme214 2 hours ago 1 reply      
Who knows what revenue numbers Pinterest will hit. Lucky for them their competition is nil.
6
djcapelis 6 hours ago 0 replies      
Anyone know if there's data on their expenses or is it all just a report on one side of the profit curve?
7
jgalt212 5 hours ago 1 reply      
> TechCrunch has obtained documents that show Pinterest has been forecasting $169 million in revenue this year and $2.8 billion in annual revenue by 2018.

So in three years, they'll have to grow revenues by 16.5X. This sort of outlandish growth assumption, necessary to substantiate their valuation, is exhibit A as to why the a16z valuation is extremely suspect.

Of course, 16.5X growth is doable when you are starting from a low number, but $169MM ain't spit.

And all that being said, $169MM for 2015 is just a forecast, so if they miss on Q4 numbers, the 16.5X assumption can easily become 20-25X.

On the plus side, their revenue-per-active-user target of $9.34 is relatively modest, and I think doable, as FB does $4.18 per user per quarter in revenue.

8
hayksaakian 6 hours ago 0 replies      
Promoted Pins

Anybody who's ever tried them knows that Pinterest is doing very well.

9
seibelj 7 hours ago 4 replies      
Sometimes I use Pinterest when I'm looking for cooking inspiration. It is absolutely nothing like 4chan; I have no idea what you are talking about.
10
pftom 4 hours ago 0 replies      
Considering this revenue number, it really can't be valued at $10 billion.
The Lonely Death of George Bell nytimes.com
145 points by Bud  13 hours ago   50 comments top 14
1
sakopov 13 minutes ago 0 replies      
It's so difficult when you're on a trajectory that you know will kill you yet you cannot change it because it feels like your entire being is so tainted and the sadness is so ingrained in your soul that your mind just tells you to isolate your miserable existence from those around you. I have struggled with this my entire adult life and I'm writing this with tears in my eyes because reading this beautiful piece is like unraveling my own future. Rest in peace, George. We all die alone, but no one deserves to die lonely.
2
asifjamil 8 hours ago 1 reply      
A wonderful piece; it almost read like a novel.

At the end, I couldn't help but try to derive some overall takeaway. The best I could come up with was to cherish your friendships and always try to keep in touch. One concern I have: could today's internet-based culture hinder this?

3
hitekker 8 hours ago 1 reply      
https://en.wikipedia.org/wiki/Kodokushi is the Japanese name for this phenomenon. There were several articles about it a few months back, like:

http://www.slate.com/articles/news_and_politics/roads/2015/0...

http://rendezvous.blogs.nytimes.com/2012/03/25/in-japan-lone...

http://qz.com/380685/photos-cleaning-up-after-japans-lonely-...

They say you mentally break up with your spouse/partner some time before you actually do so. From what little I can glean here, the same can (not always, just can) hold true for dying alone.

4
noobermin 7 hours ago 1 reply      
Contrary to the implications of the beginning of the piece, Mr. Bell had connections, he had the potential for relationships, he even had friends who tried.

I'm an introvert, who recently moved to another city for graduate school. I don't get out very often unless it's to a solitary place (my own spot in a coffee shop, for instance), so I can relate to Bell's desire to be solitary most of the time. However, if anyone reading this thinks his death was a wrong, know that he had ample opportunity to at least have one or two people to "be around his death bed" so to speak. The end of the piece talks about "the Dude", not to mention the possible wife who it seems loved him even towards the end.

If there is any take away, it is to cherish our relationships with others. There are extreme cases in which people really are alone, but for those of us who can at least count on our hands the ones we love, or at least like, (even if it takes us a few minutes to enumerate them), we should continue to develop those friendships and not force people out who could otherwise enrich us. As the "investigators" put it, we don't live forever. We need to use the time we have to enrich others, because only the rest of society will out-survive us as individuals.

5
lucio 1 hour ago 0 replies      
> IN 1996, GEORGE BELL hurt his left shoulder and spine lifting a desk on a moving job, and his life took a different shape. He received approval for workers compensation and Social Security disability payments and began collecting a pension from the Teamsters. Though he never worked again, he had all the income he needed.

Sometimes, having income without effort is death in life.

6
cousin_it 7 hours ago 3 replies      
We need a society where every person is needed and wanted. We need it more than any dumb technical advance that lessens the need for people, like robots delivering pizza or whatever. That will take some heavy thinking, though, and I feel that we haven't even started.
7
gammafactor 7 hours ago 2 replies      
I never understood this stigma about hermits/social isolation in western societies. In the east, it's much more accepted.

Reading this piece I got the impression that the writer and persons interviewed were HORRIFIED by what they discovered. It seemed to me as if "having friends" was pretty fucking high in their list-of-important-things-in-life.

I was very amused by this, almost started laughing in fact. Human relationships are not for everyone, I'd say; in fact, there are many healthy people that view them as a pointless waste of time at best.

Technology will solve this little problem once and for all, in this century. I won't go as far as AGI, but when housekeeping robots become ubiquitous, which is surely less than 20 years down the road, society at large will have to evolve, and primitive points of view as described in this nytimes piece will basically disappear.

8
goodJobWalrus 3 hours ago 1 reply      
To me, the saddest thing about George was that he seems to have merely existed while he was alive. From the article, he didn't seem to have any passions or ambitions. He didn't seem to want to go and see things, experience things, do things. He died in the same apartment he was born in. He didn't really live; he just was, completely passive, until he wasn't.

At least that is the impression I got from reading this.

9
jiantastic 7 hours ago 0 replies      
This is a really lovely story. I'm not usually one to be sentimental, but this has got me thinking of my priorities in life.
10
Sven7 1 hour ago 0 replies      
$5,000 for burial costs?!? And 50,000 burials a year... wow... I would never have imagined.
11
bbanyc 6 hours ago 0 replies      
I couldn't help but think of Eleanor Rigby. "All the lonely people, where do they all belong?"
12
slicktux 4 hours ago 0 replies      
This was somewhat of a depressing story, mainly because emphasis was put on the whole after-death scenario, that is, what happened to his belongings, et cetera, et cetera... But overall, I feel like I understand Bell, as well as the circumstances that led to him dying lonely...

RIP Bell
Nothing more
Nothing less
13
schroningerscat 2 hours ago 0 replies      
Indeed, a deeply touching and human story, but even more so I've been engrossed by the consuming discussion it has sparked!
14
Animats 5 hours ago 0 replies      
That's how it ends. Everyone dies alone.
Students in Switzerland build a wheelchair that climbs stairs venturebeat.com
37 points by serengeti  8 hours ago   10 comments top 5
1
Paperweight 4 hours ago 0 replies      
The inventor of the Segway first invented a far superior stair-climbing wheelchair, the iBOT, way back in 2003, but it was regulated out of existence.

http://www.hizook.com/blog/2009/02/11/ibot-discontinued-unfo...

However, the FDA recently reclassified it, so it may be put back into production within a couple of years.

http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMe...

2
DIVx0 5 hours ago 1 reply      
I wonder how well this would work on carpeted stairs. Since the treads grip only the 'nose' of the stair, is there a possibility of slippage on the pile? What if the padding gave way or the treads pulled the carpet away from the stair?

How well would this chair recover from scenarios like this? Or from any failure, for that matter.

It certainly looks like it has a lot of promise!

3
devit 5 hours ago 2 replies      
As mentioned in one of the comments on the article, a patent for something very similar was filed with 1988 priority in the US: https://www.google.com/patents/US5123495

Wonder why it hasn't made it to market in the 27 years since that patent?

4
Avshalom 4 hours ago 1 reply      
Was anyone paying attention when the Dean Kamen wheelchair stopped being made? Was it cost/demand? Did it not work? The Wikipedia article says something about the FDA reclassifying it, but that doesn't seem to explain why it stopped being made.

https://en.wikipedia.org/wiki/IBOT

5
c3534l 5 hours ago 1 reply      
That video is really melodramatic. It's a guy slowly ascending a staircase with these crazy dynamic shots and music that would be more appropriate for a compilation of airshow maneuvers. That said, it doesn't look all that technologically interesting unless they've managed to make it super-cheap.
Streaming video on 10 Gigabit Ethernet and beyond bbc.co.uk
64 points by howsilly  11 hours ago   50 comments top 5
1
pcunite 24 minutes ago 0 replies      
Linus Tech Tips talked about the difficulties in getting 10 Gigabit working.

https://youtu.be/D03t890dKTU

2
mixmastamyk 8 hours ago 6 replies      
Interesting; this reminds me of a related question. I've looked recently for 10 gigabit Ethernet on a new laptop and haven't been able to find it.

I know it is overkill; it's just that it has been about ten years already. Isn't it cheap enough yet? Can't a modern SSD keep up with it?

3
nly 10 hours ago 1 reply      
Kind of harks back to the days when people were putting HTTP servers in kernel space. Slightly different tack, though.
4
jand 10 hours ago 4 replies      
My state of knowledge leads me to think that bypassing the kernel requires some non-blob network drivers with which you can tinker. Am I mistaken?

So right now, I am missing the information on what kind of NIC they were using. Any thoughts or comments on that, HN community?

What vendor and product model would be a reasonable entry point for such endeavours? Answers very much appreciated.

5
p1esk 10 hours ago 3 replies      
Is a single CPU core able to process a 4K/50fps video stream? Or is there no need for any processing, other than encapsulating it into data packets for sending to the network card?
Several types of types in programming languages arxiv.org
62 points by chesterfield  10 hours ago   22 comments top 4
1
jasode 8 hours ago 1 reply      
Over many years of reading various essays on "types", this is the list of synonyms I've accumulated:

="constraints"

="rules"

="policies"

="contracts"

="semantics" (separate concept of MEANING from underlying DATA BIT representations)

="compatible semantics"

="capabilities"

="propositions" (and functions can be "theorem proofs")

="facts" about code that compiler attempts to "prove"

="tags" or "metatags" for compiler checking

="documentation" that's enforced by compilerspecify what is NOT ALLOWED e.g. "you can't do that"

="substitution", "compatibility"

="A set of rules for how to semantically treat a region of memory."

Because the list is synonyms, many concepts overlap. In my mind, the highest level of unification about "types" is the concept of "constraints". Related topics such as polymorphism is a practical syntax pattern that arises from data types' constraints.

Personal anecdote: I initially learned an incomplete and misleading idea about "types" from K&R's "The C Programming Language". From that book, I thought of "types" as different data widths (or magnitudes). An "int" held 16 bits and a "long" held 32 bits, etc.

It was later exposure to more sophisticated programming languages that showed me a much richer expressiveness of "types" is possible, one that the C language "typedef" cannot accomplish. For instance, if I want a "type" called "month", I could encode a "rule" or "constraint" that valid values are 0 through 11 (or 1 to 12). A plain "int" is -32768..+32767, and having "typedef int month;" provides a synonym but not a new policy enforceable by the compiler.
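A tiny sketch of that "month" constraint (illustrative only; the 1..12 rule comes from the example above and the class name is mine):

  class Month:
      # A "type as constraint": an int restricted to 1..12. Because the
      # constructor enforces the rule, any Month that exists is valid,
      # which is something a bare "typedef int month;" cannot promise.
      def __init__(self, value):
          if not 1 <= value <= 12:
              raise ValueError("month must be in 1..12, got %r" % value)
          self.value = value

  ok = Month(3)   # fine
  try:
      Month(13)   # rejected by the constraint
  except ValueError as err:
      print(err)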

2
asgard1024 8 hours ago 1 reply      
I have recognized for quite some time now that there are at least three different uses of types in programming languages (with different goals), namely:

1. Specification of constraints on data - the function arguments and return values. This corresponds to usage of types in logic. The goal here is the correctness of the program (and ease of reasoning about it).

2. A way to define polymorphic functions, i.e. functions that do the same or similar operations with different kinds of data. Here we classify data as being of some type so that we can define two functions with the same name but different types, where the correct one is selected either during compilation or at run time based on the parameter type. The goal is conciseness: to avoid explicit conditional statements everywhere or other forms of code duplication.

3. Finally, to specify the way the computer is to store data, or generally how to model abstract concepts (such as integers) within the computer. This becomes less relevant with time (as languages increase in abstraction), but it was an important use historically. For example, integers can be modeled with many different types in C. The goal here is to have the type as a reference to a concrete and efficient representation of the abstract concept.

I think this is pretty much what the paper is suggesting, although IMHO not so clearly.
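For point 2 above, a minimal sketch of type-directed dispatch (a toy example of my own, not from the paper):

  import math

  class Circle:
      def __init__(self, radius):
          self.radius = radius
      def area(self):
          return math.pi * self.radius ** 2

  class Square:
      def __init__(self, side):
          self.side = side
      def area(self):
          return self.side ** 2

  # The right area() is selected at run time from each value's type,
  # with no explicit conditionals at the call site.
  total = sum(shape.area() for shape in [Circle(1.0), Square(2.0)])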

3
chubot 7 hours ago 0 replies      
How does this compare with this paper? They are both discussing the meaning of the word "type".

http://dl.acm.org/citation.cfm?id=2661154

There was a good blog post by Stephen Kell that I can't find now (https://www.cl.cam.ac.uk/~srk31/blog/)

"The concept of "type" has been used without a consistent, precise definition in discussions about programming languages for 60 years. In this essay I explore various concepts lurking behind distinct uses of this word, highlighting two traditions in which the word came into use largely independently: engineering traditions on the one hand, and those of symbolic logic on the other. These traditions are founded on differing attitudes to the nature and purpose of abstraction, but their distinct uses of "type" have never been explicitly unified. One result is that discourse across these traditions often finds itself at cross purposes, such as overapplying one sense of "type" where another is appropriate, and occasionally proceeding to draw wrong conclusions. I illustrate this with examples from well-known and justly well-regarded literature, and argue that ongoing developments in both the theory and practice of programming make now a good time to resolve these problems."

4
chadaustin 8 hours ago 1 reply      
Wow, apropos! I was just thinking about this in the shower.

For most of my life, I equated types with sets of values, but after learning Haskell and working with higher-kinded types, type classes, and existential types, I realized I don't know anymore what a type _is_. I know that type systems provide proof that certain classes of operations are impossible (like comparing a number to a string, or dereferencing an invalid reference).

It's pretty mindbending to use existentials or GADTs and pull two values out of a record and not know anything about those values except that, for example, they can be compared for equality.

  data HasTwoEq = forall a. Eq a => HasTwoEq a a

  equalOrNot :: HasTwoEq -> Bool
  equalOrNot (HasTwoEq x y) = x == y
The example is contrived, but it illustrates the point that the types of x and y are not known, _except_ that they can be compared.

That's not the kind of thing you can express with, say, Java or Go interfaces, but it makes perfect sense once you start to break down the mental walls you've built in your head over the years.

I'm thrilled to see a growing body of accessible* PL and type theory literature, because these things are important to helping us develop software at increasingly large scales, and it's clear that very few people -- including myself! -- know enough about this topic.

* e.g. https://cs.brown.edu/~sk/Publications/Books/ProgLangs/2007-0...

I tried to do my part here: http://chadaustin.me/2015/07/sum-types/ and http://chadaustin.me/2015/09/haskell-is-not-a-purely-functio...

Show HN: Mechanical clock simulation in WebGL nikital.github.io
34 points by nikital  6 hours ago   6 comments top 5
1
Renaud 1 hour ago 0 replies      
That was a useful animation: it made me realise that I didn't really know how a clock worked; I only had a fuzzy (and wrong) understanding of how the parts worked together.

Good use of visualisation and animation is an extremely powerful tool for triggering that aha! moment.

2
dhritzkiv 2 hours ago 0 replies      
Zooming in is a bit tricky, as a page scroll is triggered. Might help to add

 event.preventDefault();
to the wheel event listener function.

3
gus_massa 5 hours ago 1 reply      
Nice animation. I'd add the option to show the speed of each gear (a curved arrow around the axis, with a label "1 turn per 15 seconds", and perhaps the number of cogs in each gear).

I'd like to see an additional simple version with only one gear, so it's easy to understand how the balance / anchor / escape wheel work.

Another nice version would be a linearized version, where all the gears are in a row, so they are easy to see. Bonus points for a smooth transformation from the linearized version to the actual version.

4
kqr2 2 hours ago 0 replies      
Nice! Can you give more details on how you made the simulation?
5
callesgg 6 hours ago 0 replies      
cool
A 15-Year Series of Campaign Simulators vice.com
31 points by coloneltcb  9 hours ago   discuss
The Confessions of @dick_nixon vox.com
45 points by seventyhorses  10 hours ago   9 comments top 3
1
thesteamboat 5 hours ago 1 reply      
The introductory quote comes from Hunter S. Thompson's eulogy of Nixon in the Atlantic Monthly.[0] It is a very enjoyable read (though not balanced in any sense of the word).

> Nixon had the unique ability to make his enemies seem honorable, and we developed a keen sense of fraternity. Some of my best friends have hated Nixon all their lives. My mother hates Nixon, my son hates Nixon, I hate Nixon, and this hatred has brought us together.

> Nixon laughed when I told him this. "Don't worry," he said, "I, too, am a family man, and we feel the same way about you."

[0]: http://www.theatlantic.com/magazine/archive/1994/07/he-was-a...

2
cbd1984 7 hours ago 1 reply      
Immediate response:

> President Clinton, young, smart, dynamic, the first president whom I understood politically (one of us, I thought), demanded that Nixon be judged on nothing "less than his entire life and career."

Notice how this is neither a commendation nor an exoneration, attempted or otherwise. Love him or hate him, you have to admit that Bill Clinton knew exactly what he was saying at any given moment, and what those words meant.

Also, this is golden:

> Remember ... the far-right kooks are just like the nuts on the left ... but they turn out to vote.

Thus we have the Southern Strategy, which leads directly into what the GOP is now.

3
hebdo 7 hours ago 2 replies      
Serious question, however stupid, ignorant or offensive it might sound: why is the following anti-Semitic? Because it was false in Nixon's time? I'm not that familiar with the history of the United States in the 70s.

You know, it's a funny thing, every one of the bastards that are out for legalizing marijuana are Jewish. What the Christ is the matter with the Jews, Bob? What is the matter with them? I suppose it is because most of them are psychiatrists.

The Universal Design christine.website
26 points by xena  10 hours ago   4 comments top 2
1
yarvin9 1 hour ago 1 reply      
Xena is right to note that this simple state-transformation function, f(event old-state) -> [actions new-state], is the obvious and eternal way to build a server. It's also worth noting that it's basically Haskell's State monad.
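A minimal sketch of that shape (the event and state names are hypothetical; nothing here is Urbit-specific):

  def handle(event, state):
      # The pure step function: (event, old-state) -> (actions, new-state).
      # Effects are only described in `actions` and performed outside.
      if event.get("kind") == "deposit":
          new_state = dict(state, balance=state["balance"] + event["amount"])
          return [("log", "deposited %d" % event["amount"])], new_state
      return [], state

  actions, state = handle({"kind": "deposit", "amount": 5}, {"balance": 0})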

Whatever you call it, it's a design pattern, not a system service. Urbit on the outside, as an "operating function," is defined by the "universal design." But since it's a pattern rather than a service, it will tend to reappear at each layer of a layered system. Urbit on the outside uses this pattern and so do Urbit applications, but these are very different layers of Urbit.

At the application layer, a command to an IRC daemon is a great example. There still seems to be a good deal of boilerplate in the Lua code presented here. The attraction of the "universal design" is the goal of eliminating all, or almost all, boilerplate code in a network server daemon.

How would one go about that? First, be a single-level store, so the daemon is automatically a database. Second, make application messages transactions with end-to-end acknowledgment, no return value, and exactly-once delivery. The message is automatically deserialized and validated, and passed to the application as a simple typed argument. If there's a problem with the operation, just crash; the result is delivered as a transaction failure with an annotated stack trace. Also, messages should be sent over an encrypted P2P network and authenticated by scarce, memorable identities...

On the other hand, I'm sure the Lua daemon is a lot faster!

2
aji 2 hours ago 1 reply      
>This design will also scale to running across multiple servers, and in general to any kind of computer, business or industry problem.

If I understand what you're getting at, you're saying that locking is the solution to all concurrency problems? This section is the most interesting to me, as I've been researching concurrency at a high level lately. I'm a little confused by your conclusion. It seems naïve to claim that multiple hosts can agree on what action to take "just" by using locks. What if a peer is holding a lock and becomes unreachable? What if the peer isn't dead and thinks it still has the lock? What if the core that is issuing the locks becomes unreachable?

The Deals That Made Daily Fantasy Take Off wsj.com
22 points by prostoalex  8 hours ago   5 comments top 3
1
mhartl 5 hours ago 2 replies      
I cofounded a daily fantasy sports site in 2004, so it's funny to see FanDuel's 2009 launch date described as "early to market". But the environment for fantasy sports in 2004-2005 was incredibly different, with the major leagues often showing overt hostility (including the NFL Players Association suing a company for using the players' names without permission). It's yet another example of how big a factor timing can be in the startup game.
2
1123581321 4 hours ago 0 replies      
Interesting that the revenue share of winnings is projected to remain 10%. This suggests players don't value playing on a site that takes a smaller percentage of the game - or that the sites collude. Do any of the companies that occupy the 5% of the market that isn't FanDuel/DraftKings try to differentiate based on cost?
3
pcprincipal 4 hours ago 0 replies      
So it's perfectly kosher to list big-time "investors" when your company gives them free equity? I was always under the impression that MLB, NFL, etc. voluntarily ponied up to invest in DraftKings.
Staging, Manipulation and Truth in Photography lens.blogs.nytimes.com
17 points by nkurz  7 hours ago   1 comment top
1
jlarocco 4 hours ago 0 replies      
The title should really use "Photojournalism" instead of "Photography," IMO. Outside of photojournalism, staging and manipulation are a large part of photography.
Structural Typing for Clojure github.com
49 points by ahjones  10 hours ago   12 comments top 7
1
escherize 51 minutes ago 0 replies      
It would be interesting to see a "When structural-typing is better than prismatic/schema" section in the readme. I read the readme, and I don't see why you can't use something like s/validate instead of built-like. So instead of:

  (type! :Point (requires :x :y))

  (some->> points
           (all-built-like :Point)
           (map color)
           (map embellish))
To keep it simple, as in the first example of the whirlwind tour, we could use:

  (def Point {:x s/Any :y s/Any})

  (some->> points
           (s/validate [Point])
           (map color)
           (map embellish))
It's good that structural-typing does not use macros, instead building on top of Specter, which is pretty cool. So is there a performance enhancement?

It's clear that one should use structural-typing where expertise with Specter has been attained; I'm not sure when else it's the best choice.

2
lbradstreet 6 hours ago 3 replies      
I've had good luck with prismatic schema https://github.com/Prismatic/schema, which seems to be along a similar direction. It's fairly low commitment and can lead to big gains fairly quickly. I assume this would be similar.
3
seivadmas 6 hours ago 2 replies      
Important to note that this isn't STATIC typing (which detects errors at compile-time), rather this is more like a validation library to make sure structures have certain properties at run-time.

Not saying it isn't useful, in fact I have a project in which this would be a very good fit and I might even implement it there.

4
bmh100 9 hours ago 0 replies      
This seems to me like a very practical way to approach Clojure typing. Similar to the author, I have often needed to make a series of transformations on complex objects. Those transformations mostly depended on the presence of certain keys, so having strict types was unnecessarily rigid. Derived typing would also be useful, of course.

One point to appreciate about this approach is the flexibility in only worrying about the relevant pieces. In my case, I may be worrying about whether a continuous variable has been tagged "datetime", requiring additional processing steps. Merely checking for such tags allows the input data to implicitly direct the flow of the program, reducing the coupling between data and specific processing implementations.

5
dj-wonk 6 hours ago 0 replies      
I've also been thinking about types, validation, and structure. I see room for various approaches, including static typing, validation, and more. For example, here's an in-progress library that I plan to build out soon: https://github.com/bluemont/shape
6
twsted 5 hours ago 0 replies      
"somewhat inspired by Elm".We are seeing a trend here.

Speaking as one who is trying to use more functional languages and is starting to love Elm.

7
lkrubner 6 hours ago 0 replies      
Our software system currently uses Redis as a central bus, and around that we have a dozen apps that send a hashmap back and forth among themselves. We use Carmine/Nippy to serialize and deserialize the hashmap, so we never have to think about anything other than a hashmap. All the bugs we face are because of missing or misused fields in the hashmap. For us, a combination of structural typing and Nippy could potentially protect us from 90% of the bugs we have seen so far.
Humpback whales synchronize their songs across oceans medium.com
46 points by dang  10 hours ago   11 comments top 4
1
kragen 6 hours ago 0 replies      
It looks like the sonograms are full of harmonics. The conventional musical notation for a note with rich harmonic content, such as a single pluck of a guitar string, is not a vertical line on the staff with notes at every harmonic; instead, you just indicate the pitch of the fundamental. (Even if the fundamental itself is mostly missing, like in the low notes on an upright piano, that's where you put the note.) Then, notes with different harmonic content (because they are played on different instruments) are plotted on different staffs, although this might be counterproductive for visualizing whale songs. Colors are probably better for that.

It would be interesting to see if a second-order Markov model of the whale song unit sequence finds information that is not captured in a first-order model. More interesting still would be if a stochastic context-free or pushdown model were able to predict whale songs better than a similarly-complex Markov model, as it would indicate that the whale song has a recursive structure, like human language.
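A minimal sketch of where such a comparison starts: counting n-gram transitions over the discrete song units, then asking whether the longer context improves prediction (toy unit labels, not real recordings):

  from collections import Counter

  def transition_counts(units, order=1):
      # Count (context, next-unit) pairs; order=1 gives a first-order
      # Markov model, order=2 conditions on the two preceding units.
      counts = Counter()
      for i in range(len(units) - order):
          context = tuple(units[i:i + order])
          counts[(context, units[i + order])] += 1
      return counts

  song = ["A", "B", "B", "A", "C", "A", "B"]
  first = transition_counts(song, order=1)
  second = transition_counts(song, order=2)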

It makes some sense that you would use a long, highly-redundant transmission of a sequence of discrete symbols, which then you would repeat after hearing, to distribute information of general interest around the ocean, where travel is slow and the latency-bandwidth product is high. The researchers speculate (largely on the basis of sexual dimorphism) that the information communicated is merely fashion, but surely there is some generally-useful, temporally-changing information of interest to humpback whale survival and fecundity.

2
Pyxl101 8 hours ago 1 reply      
How can we attempt to pull patterns out of the song?

What if the songs actually contain the whale equivalent of GPS coordinates? How would we detect it?

I'm sure a few people have spent many hours trying to do so, but I wonder if machine learning could help. It would be a challenge: we'd need factors to correlate to, like the whale's position or information about their environment (location of boats, pollution, or prey).

Perhaps a start would be triangulating the whale's position during each song, and looking for elements that somehow vary with location. I imagine someone has looked for this. Location might not actually be a good thing to look for - whales can presumably determine each other's location from the sound source and distance alone, like a human could hear the direction and distance of a shouting human. What else might they be communicating?

3
jMyles 8 hours ago 1 reply      
It really is incredible. It's worth giving serious consideration to pausing human activity in the oceans until we really understand what the whales are doing and saying.
4
callesgg 7 hours ago 1 reply      
I would have liked the article to come to some form of conclusion.
The History of American Surveillance revealnews.org
25 points by pmcpinto  9 hours ago   4 comments top
1
heroprotagonist 2 hours ago 2 replies      
I wouldn't mind seeing sources for some of the theories posited here. For example, I searched briefly for other sites that mention William Howard Taft's use of blackmail and wasn't able to find any. It wasn't an exhaustive search, to be sure, but the lack of information reduces credibility in my eyes.
       cached 18 October 2015 04:02:05 GMT