Was wondering about that. I'm surprised the OpenBSD team is accepting the commits - something so fundamental and Windows-specific doesn't seem like their kind of thing - but great!
PS. If you're coming from a Unix background and interested in learning posh: https://certsimple.com/rosetta-stone
The best way to address POSIX compatibility concerns is to implement a proper POSIX layer in Windows (and not in a half-baked manner like the now-deprecated SUA). I can't imagine how it would hurt anybody.
EDIT: I misread this and thought it was only a client. Geez. If it's a server, then of course it should be a service.
My understanding is that roundabouts are best when traffic is low; when it gets heavy they become much less efficient. Lights, while adding a lot of overhead in low traffic, scale better. There's also the question of space required.
Since the UK may have too many roundabouts due to traffic growth, and the US virtually none, they could both be building the opposite and making the right choice.
As a driver, I much prefer roundabouts. When I'm cycling, I just prefer roads with fewer cars and can't wait for self-driving cars, since at least they won't be driven by idiots.
There are safer roundabout designs used in Holland: http://www.aviewfromthecyclepath.com/2014/05/the-best-rounda...
As the text acknowledges, a lot depends on the design of the roundabout. Just last week, we got https://bicycledutch.wordpress.com/2015/10/13/explaining-the..., which explains the current thinking of the Dutch on roundabout design.
Also, roundabouts have become more popular, but traffic deaths certainly haven't gone up in the Netherlands (https://www.swov.nl/rapport/Factsheets/UK/FS_Road_fatalities...).
So weird to see that the reason we have them is that our mayor visited England many years ago. Now they are going the other way. Did their mayor or someone visit us and fall in love with traffic lights?
However, as traffic at any particular intersection increases it eventually may make sense to replace a roundabout with a traffic light. Roundabouts can handle large amounts of traffic, but only if they're large roundabouts -- if you can't expand the actual road space then the traffic light is the best move.
groff still going strong... although it seems like Kernighan got tired of drawing using the 'pic' language...
"But it has comparatively few features and is unlikely to add more. For instance, it has no implicit numeric conversions, no constructors or destructors, no operator overloading, no default parameter values, no inheritance, no generics, no exceptions, no macros, no function annotations, and no thread-local storage."
- What is Go well suited for, other than network programming?
- Why might one decide to write the backend API of their web app in Go, compared to, say, Grails or Python?
It now seems to be recognized that, in Go, if you want to lock shared data, use the lock primitives. Don't try to construct locking primitives from channels; that's error-prone and hard to read. This new manual seems to recognize this. When they want a shared counter, they use a shared counter with traditional locks.
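The idiom in question is just a mutex around the shared count. Here's a minimal sketch of my own (the type and method names are made up, not from the book):

```go
package main

import (
	"fmt"
	"sync"
)

// counter protects a shared count with a plain mutex,
// rather than simulating a lock out of channels.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

func (c *counter) value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.value()) // prints 1000; without the mutex this would be a data race
}
```

The channel-based equivalent needs a goroutine that owns the count plus request/reply channels - far more code for the same guarantee, which is presumably why the book doesn't bother.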
Which language/framework would you choose today for writing web services? Preferably with the following characteristics: static typing (or at least static analysis), easy deployment (e.g., generates a single binary like Go), supports concurrency very well, is small/simple, has good tooling and debug support, and is fun to write. Go (except for good debug support)? Elixir (dunno how good it is with deployment and debugging)?
The book is well written and it looks like it covers a lot of common topics, I think I'm gonna buy it.
Book homepage is here: http://www.gopl.io/
mods, please fix
Several free full Go eBooks listed at http://hackershelf.com/topic/golang/
I'm not sure I can get behind that without generics.
I really wish they would beef up this portion of the book.
- Mathiness is truthiness, in the sense that putting down mathematical symbols lends an air of intelligence and truth to an argument, for many people, especially those who don't or can't take the time to understand the argument's logic.
- Then he explains how economics data are not as easy to construct with universal agreement as with, say, physics (temperature). Agreement on practical action is hard when contrary frames and interpretations appear valid.
- He then mentions how this might be a problem for defining capital and other constructs for Piketty, but without any useful argument beyond this assertion.
- He then contrasts Feynman's integrity, trying to disprove his own work to make it better, to economist George Stigler's rhetorical style of conviction, ignoring contrary arguments, and playing the polemicist.
- He then mentions Isaiah Berlin's distinction between foxes, who know many things, and hedgehogs, who know one big thing.
All these are interesting frames by which to compare and contrast various things. Yet his analysis, after bringing in all these ideas, is just to say that economics needs both careful analysis and effective rhetoric. Well, duh, but how does this tie into all the great setups he's made so far?
And mathy people are good at neither rhetoric nor polemic? But surely, if you're afraid of these people, it would be a mathy person using rhetoric to undermine the real philosophy and logic that should - pace Plato - be informing the argument.
It's a post full of story and setup, but as yet, signifying nothing.
Every careful person equipped with a reliable thermometer will make the same reading of temperature. There are alternative scales, Fahrenheit and Celsius, but both record the same thing...
Economics is genuinely harder. National income is a more complicated concept than temperature, and there are plausible alternative sets of rules for calculating it. Serious minded statisticians have spent many years discussing these issues, and there is now a UN-sponsored standardised system of national accounts.
But it is easy to write a mathematical symbol without giving thought to what observable fact in the real world corresponds to that symbol, or whether there is such an observable fact at all.
But isn't that exactly how we settled on the truth of temperature? Years of debate about what the right constructs for defining temperature mathematically are?
People give a lot of authority to mathematics, because mathematics is immensely powerful to explain the natural world. This unfortunately leaves the opening for charlatans to do fake math and lend their garbage research a completely unwarranted air of legitimacy.
Yes, you see, there are ideologically-driven economists like Piketty, who have a political stance. Then there are the fair, neutral, unbiased economists who disagree with Piketty, who are only motivated by the search for the truth.
It might be cheap enough and practical enough for locals to install themselves in the places where the low humidity and cooler ground temperatures would allow it to work. But if you put this device anywhere between say 35 degrees north and 35 degrees south it probably won't work very well because the average ground temperature will be too high.
Wait, what? How is that the protocol? There's no two way validation at all? The chip just says "yes"?!
Can anyone with knowledge of details confirm? This seems isomorphic to my ears with "the PIN is just security theater".
Another thing, in the context of the USA, is that this authentication flaw isn't much of a vulnerability, as it only applies to offline chip transactions. In the USA (I believe) and here in Canada, all transactions are online, which means the PIN will be rejected by your financial institution's back-end systems in these scenarios.
These types of hacks have since been corrected using what is called CDA (Combined Data Authentication). Blurb on SDA/DDA/CDA here: http://www.cryptomathic.com/hubfs/docs/cryptomathic_white_pa...
Edit: Many Canadian financial institutions still use the weakest data authentication (SDA) because all transactions go online - spoofing a card's PIN verification response doesn't fool the back-end system. Visa and Mastercard both have mandates for newly issued cards to be provisioned on chips with CDA (I believe; it could be DDA, which would still be susceptible to this attack).
Edit 2: When I say "offline", I mean at a point of sale machine - the POS does not reach out to the payment network to perform an "online" transaction where the PIN and card are validated by the back-end systems.
Edit 3: The article doesn't give EMVCo any credit for actually solving the issue before any real world hack was known to exist.
They say nearly 600k Euros were charged, but given the sophistication of the attack, I wouldn't be surprised if we hear later that it was in use at different locations as well, and we just aren't hearing about it because they haven't caught those people yet. They only caught these ones because they kept going back to the same locations.
Security by obscurity. That's always a good plan. I'm sure that folks who went through all this trouble to design this hack wouldn't ever be able to find that information. </sarcasm>
Besides, can't you just use the chipped card online without the chip or PIN?
Aside from being proprietary tech, I'm not sure why fingerprinting the magnetic stripe never took off. It seems so much simpler, and unless you can rearrange iron at the molecular level, impossible to replicate.
We've been using it for a large dataset, and it has been fantastic. Compared to MySQL 5.7 and PostgreSQL, it has the advantage of supporting the TokuDB engine out of the box. My data uncompressed is 3 TB; with it, we can fit it in 300 GB with all indexes. Read Free Replication with TokuDB (https://github.com/percona/tokudb-engine/wiki/Read-Free-Repl...) also enables us to use a very cheap VPS as a slave.
Pretty frustrating. I was pretty excited since MariaDB is a drop in replacement for MySQL (more or less). But you have to use it on a totally different architecture which can't be justified without a lot of deliberation.
But I don't see any patches. Would love to see something like this developed.
Also, isn't there cstore_fdw for PG for compression? Although I know there are limitations with it, which has prevented me from using it. :(
Pg has a lot of traction these days; I've used it for quite a while, but I've never used MariaDB. So I'm quite curious how they compare feature-wise :)
We're also hosting a public forecasting tournament for him and his team that focuses on geo-political forecasting: https://www.gjopen.com/
As the article states, nobody outside of the business will ever hear about 99.9% of successful intelligence work done, and 90% of the people inside won't hear about it either. So unless you are directly involved in analysis, collection, or operations, it's impossible to get a good feel for efficacy.
The same thing seems to happen in other lines of work - when an engineer is accountable to code review, he might do better. When an author is subject to the will of an editor, his work is ultimately finished faster, and with better quality; witness George RR Martin's speed on the first three books in A Song of Ice and Fire versus the glacial pace of the last two.
That's not how it works. If you consistently misreport your calibration, you're still miscalibrated. Consumers of the intelligence would (should) notice the miscalibration and correct for it, regardless of direction.
Who the hell is running their analogy department?!?
"Thirty years! That's a big number! We need some kind of comparison that will instantly give the reader perspective. Of course, in this, as in so many other things, the answer lies in pachydermian pregnancy patterns."
Nilotinib was "designed" as a leukemia treatment, but ... that didn't go so well. Oops, just kidding - it's a Parkinson's/dementia treatment!
(Also, one of the erectile dysfunction drugs, I forget which one, was "designed" for something similarly unrelated...)
The real headline here is that these people have no fucking idea what they are doing. They are throwing shit at the wall to see what sticks and then informing the marketing department as to which side effects were the most interesting.
This dovetails nicely with their other favorite activity: finding efficacious natural compounds and then tweaking them just enough into a synthetic compound that can be patented.
 "Interim results showed Tasigna is unlikely to demonstrate superiority compared to Novartis's Glivec (imatinib)*, the current standard of care in this setting." (wikipedia)
But despite the apparent striking effects, doctors have cautioned against great expectations for the drug at this stage as there was no control group or placebo used in the study for comparison.
Heh, of course. But right now it could be used as an off-label treatment?
In a decade we will look back and think "how could we be so stupid for decades".
And I'm looking forward to when "harder" drugs like cocaine, meth and heroin are legalized or at least tolerated - no more drug turf wars, no more shadow states in Latin America, and especially far fewer deaths and medical issues from contaminated drugs.
Drugs are actually pretty safe; what makes them so incredibly dangerous is the contamination happening in the supply chain, from unclean manufacturing, to normal degradation due to improper handling, to cutting at the dealer level.
I oppose these efforts wholeheartedly even when I agree with the aims. Unintended consequences are the order of the day when you dabble in saving people.
What? If there is a large illegal market for alcohol, how can tax stamps be considered a reliable indication of alcohol sales?
I see the NRA adopting something more akin to pro-choice advocacy - a zero compromise scorched earth policy, where any attempt to discuss banning the most extreme products (late term abortions, unlicensed fully automatic weapons) results in arguing uphill against a slippery slope.
(disclaimer: I am an NRA member)
The repression involved in implementing alcohol prohibition had the effect of pushing a common and integral part of society into the black market.
The NRA is promoting the right to bear arms, and actually it would seem that the effect they're seeking should be the exact opposite.
In effect, prohibition pushed a polite leisure activity into the black market. The NRA's goal appears to be the exact opposite: to keep a polite leisure activity out of the black market.
I don't think German understands the irony of this, because I get the impression that they want to compare the moral status of the prohibition movement to the moral status of the NRA.
You nailed it right there.
I had my PC set up with all the file sharing programs, hammering my cable modem 24/7, downloading movies and music. DVDs were $10 and audio CDs were $5.
My main competition was bootleg movies on regular CDs that you could buy at flea markets, gas stations, and party stores, but they had terrible quality. Not just because they had to fit on a CD: many of them were shot in the theater with a handheld video camera, not rips from IRC channels. They were only $5, though.
It was way too much work driving around dropping discs off and I couldn't use my laptop for learning Cocoa, so I stopped doing it after a few months.
Someone needs to beat this guy over the head with a copy of The Elements of Style, or at the very least confiscate his thesaurus.
One thing I noticed about the video introduction is the speaker is explicitly talking to the viewer as if he or she is not the target audience, e.g. "while you and I may take this for granted ..." and "the disabled are exactly the same as you and me ..."
And it looks cool. Completely the opposite feeling of watching someone on a Segway, which could make even the coolest person look like a mall cop.
But at the end of the day, the most optimal recipe for success is still:
1) Find a real problem
2) Build a solution
3) Start selling
There are alternatives to growth hacking, content marketing, and whatever other tricks are out there.
Just look around you: there are real problems everywhere where the solution doesn't need a marketing budget. It just needs to make itself known. And it's revenue from day one.
Love every single second of this.
I think there's a shift that hardware oriented entrepreneurs might mine for some ideas.
Around web 2.0 time there was a shift where people got more comfortable with the internet. They used real names and pictures without expecting this would inevitably lead to serial killers at the door. Facebook worked because people agreed to tell the internet their name. Online dating went mainstream. Twitter, LinkedIn, all sorts of sharing became common. The interesting part is that the technological trends were only part of the picture. Cultural shifts were just as important.
Tech is cool now; that's the new trend. Where a calculator watch in the 90s would get an 8-year-old beat up, today's equivalents are status symbols. Interestingly, glasses became cool in recent years.
So, ideas might be found by looking over old technology that is uncool and seeing if it can be re-imagined as 2015 tech. A regular electric wheelchair is uncool. This segway thing is cool.
One really obvious device to think about Apple-ising is hearing aids. Hearing aids are so uncool that 80-year-olds don't want to be seen with one. They are all about being small, flesh-colored and "invisible." I think there's a decent chance a large, bright green earpiece might be cool.
And speaking of hearing aids: can hearing aids improve the hearing of non-impaired people? Can you get better-than-normal hearing from a hearing aid?
That will work for people whose core muscles work, and don't flop or twitch. Which is not everybody. But still a nifty thing.
On the other hand, being European, I... applaud him for not displaying the (mandatory) cookie header.
At least for simple navigation I think the tech is mature enough to make it today. Just make sure to avoid obstacles and people and find your way from A to B.
Couple that with the Google car fitted with an automatic docking station and you have an almost complete system of transport.
The example use-case on the video is a paraplegic man. I don't think this is meant for people too lazy to walk on the sidewalk.
A trite and cheesy observation... it seems we are trying to make machines learn to walk and humans learn to roll :)
All in all I think the product/idea are great.
The problem you need to solve is getting the insurance companies and Medicare to pay for your device. You need lobbying and certifications and all that bureaucracy. No matter how mediocre your product is, you can then sell it like hotcakes.
Also, is this what a New Zealand accent sounds like?
Imagine dropping something and instinctively leaning over to catch / retrieve it.
(I'm probably going to get downvoted for this, but hey, I like to live a little)
Maybe they'll exist by the time I need one.
This was in Rome, which isn't known for its nicely paved streets.
- Heavily regulating crowdfunding, rendering it unusable
- Eliminating stock options' fiscal incentives 
- Taxing the sun in order to protect energy lobbies from solar energy. 
- Trying to get a cut from Google News because "they're worth it," passing a law for it, and then trying to retract it only a few weeks later once they saw the page-view metrics go south...
Hopefully things will get better once the current ruling party leaves the government in the upcoming December elections. Hopefully...
My hypothesis is that the absolute lack of jobs for young people would lead to more startups as necessity is the mother of invention, but it could be that the more immediate day-to-day of putting food on the table might distract from that.
$7 a month to run a little app? That's less than I spend on coffee each day, and I'm glad that money is going towards a differentiated and valuable service.
I have a couple of toy apps, a personal blog and a "not such a toy" app on Heroku and I have used their services for free for AGES - more than fair enough that they make some money.
That said, for hobby stuff I've switched to a cheap VPS plus Dokku and haven't looked back. It works with Heroku buildpacks and even runs custom Dockerfiles if you need anything fancy. Hardly requires any maintenance.
There are a couple of alternatives like Deis and Flynn that offer more advanced features, but they're much more complex and way overkill for my pet projects.
http://progrium.viewdocs.io/dokku/
http://deis.io/
https://flynn.io/
There's one cool alternative I found: Hetzner Online leases dedicated hardware that other customers stopped using or ordered but didn't use, etc. It takes the form of a reverse auction: unused hardware gets cheaper until someone rents it.
You can get an i7-3770 server with 2x3 TB disks and 16 GB RAM for 32 EUR per month. That's not much more than Linode, but it is much beefier and all yours (though no SSD). If you pay more, you can get something like an i7-3770 with 32 GB RAM, 2x3 TB disks and 2x240 GB SSD for 65 EUR a month.
The servers are in Germany, which might be interesting for Europeans who are wary of US cloud companies.
I'm not affiliated with them in any way, I just find their offer cool.
I get that people are frustrated with Heroku's pricing, but at the end of the day I think that if your side project isn't worth the money you spend on its hosting, maybe you should look into other side projects? If it's truly not a money-making project, not needing to scale it or have it up 24/7 seems like a reasonable trade-off for not paying for its hosting.
The number of tools and resources Heroku provides is quite significant, and developing/deploying/maintaining similar solutions is certainly not cheap or easy. Especially at scale.
Disclaimer: I'm one of the Dokku maintainers.
- http://progrium.viewdocs.io/dokku/
- http://convox.com/
I'm wondering how AWS compares, though - running a dedicated, manually managed EC2 instance or ElasticBeanstalk. Seems like you could get your own server, and still have little barrier to scaling as high as anybody could ever need at the touch of a button. Anybody have much insight on that?
You can stay on the free tier of Cloudflare and the hobby tier of Heroku and get full encryption on your custom domain (and without worrying about getting certs issued). This is how I run my more important hobby apps.
And regarding the lack of novelty of `git push heroku master` I'd argue that it's still surprisingly valuable even in modern times. Sure you can do it yourself using a dozen different techniques these days, but as a hobbyist I want to be building apps and not maintaining personal infrastructure.
(Disclaimer: I used to work at Heroku.)
However to the comment about "git push heroku master," someone that is an "early" programmer like myself finds that level of simplification helpful when I'm focusing on things like learning all the other things that could go wrong with my Rails Tutorial app.
That said, as I now am at the point of starting to work on projects that I hope to one day be public facing, I'm wondering if there are any recommendations for "Heroku-like" providers with a free tier that stays up 24/7 for super simple Rails and Sinatra apps, perhaps with a DB.
The benefit to me of using something like Heroku is not because I get more CPU/RAM than an equivalently priced Linode instance - the value is in not having to manage a Linode instance at all.
I'd also argue that git-pushing to Heroku is still a lot less friction than dealing with docker images or chef/puppet scripts.
They're finally too expensive for you, or you outgrew them? They aren't cheap, and never have been, especially for hobbyists.
If you have a hobby, you most likely have more time than money to spend on it, so why use something like Heroku? The very nature of the activity is to spend your time, not your money; Heroku only gives you the feeling of savings until you outgrow their nearly worthless free offering. Then you have to spend your time moving, configuring, and learning an "entirely new" system. Had you done this the first time, you'd have saved money and time.
Yeah... "git push" to deploy is easy-- and if you're new to application development it can be nice, but! It also means you have no idea how to configure or run your app (i.e. deploy). It means you're more vulnerable to your vendors quarterly earnings needs (price hikes), it means you're more vulnerable to vendor technology changes (lock-in), it also means if it ever hits the fan-- you're gonna have oodles of downtime while you learn how to do it "right" the first (nth) time.
FWIW, it's actually really easy to run a VPS that's stable once you learn how to do it properly. Don't fear it, mate. I've always found it nice since none of my projects ever get substantial traffic, I can usually run some auxiliary service I need (like Gitlab) on it too. A two for one, eh?
... One last thing, if you're running a business Heroku might be the right choice (it's up to you to evaluate the risk/reward).
 I personally think deploying with Capistrano is a cinch (cap deploy) but it'll take you a lil' bit of effort to set up the first time.
What I was missing: these users have complicated code in their $PS1 or other 'prompt variables', which makes the shell do things every time they change directories.
Indeed, the title contains the word 'prompt'.
This is apparently a solution for a problem I don't have. (There's no way I'd ever tolerate a slow cd command, async or not!)
This uses zsh-async: https://github.com/mafredri/zsh-async
What I'd like instead, based on the same issue/idea, is a real "linemode" where your prompt is "locally updated" and anything sent to "the shell" is asynchronous.
This would not fork processes in the background (I want to be able to be selective about that, obviously! So it would queue up processes if I don't specify '&', for example) but would ensure that you can always get back to the prompt.
This also makes it so that the prompt would be instant over the network as well.
Note that some protocols do support linemode, just not SSH, nor obviously the common native shells, at least not without building a forked version.
There are a couple of good open source options for software-based AFSK APRS modem/TNCs:
Dire Wolf: https://github.com/wb2osz/direwolf
minimodem: https://github.com/kamalmostafa/minimodem
You'll still need a USB sound card though, which adds weight and cabling. I would love to see some open-source SBC hardware a la Beaglebone Black or RasPi that had a high-quality sound card built onto the board and wired through a Molex-type plug that was suitable to the bumps and bounces of flying.
 https://github.com/chrissnell/GoBalloon -- This was my first project in Go and while it works well, it now looks pretty ugly to me with a year and a half of Go experience under my belt.
He approaches the problem at a higher level than a typical modem, for instance, he uses predictable data in the packets as framing instead of the deficient framing provided by the bit level protocol. When confronted with a corrupted packet he also tries to guess the true content by doing a "what if" analysis on the marginal bits to see if flipping them the other way fixes it.
We take a different approach to understanding code than the traditional antivirus world. Rather than try to hunt for a needle in a haystack, we've created a system for finding anomalies in code that's already published. For example, you can build a set of signatures for "bad apps" and then repeatedly search for them (AV model) or you can profile what makes an app "good" and then look for clusters of apps that deviate from it (SourceDNA).
Consider an ad SDK like Youmi here. They weren't always scraping this private data from your phone. There are some apps that have this library but that version is a typical, only sorta intrusive, ad network.
But, over time, they began adding in these private API calls and obfuscating them. This change sticks out when you track the history of this code and compare to other libraries. There was more and more usage of dlopen/dlsym with string prep functions beforehand. This is quite different from other libraries, where they stick to more common syscalls.
By looking for anomalies, we can be alerted to new trends, whatever the underlying cause. Then we dig into the code to try to figure out what it means, which is still often the hardest part. Still, being able to test our ideas against this huge collection of indexed apps makes it much easier to figure out what's really going on.
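As an illustration of the "profile what's good, flag deviations" idea, here is a toy sketch entirely of my own: the feature (dlopen/dlsym call-site counts), the baseline numbers, and the threshold are all made up, and the real system presumably scores many features per app:

```go
package main

import (
	"fmt"
	"math"
)

// zScore measures how far a new observation x sits from a baseline
// built from apps believed to be good, in standard deviations.
func zScore(baseline []float64, x float64) float64 {
	var sum float64
	for _, v := range baseline {
		sum += v
	}
	mean := sum / float64(len(baseline))
	var ss float64
	for _, v := range baseline {
		ss += (v - mean) * (v - mean)
	}
	sd := math.Sqrt(ss / float64(len(baseline)))
	return (x - mean) / sd
}

func main() {
	// hypothetical dlopen/dlsym call-site counts across "good" library versions
	baseline := []float64{2, 3, 2, 4, 3, 2, 3}
	suspect := 19.0 // a new SDK version with far more dynamic lookups
	z := zScore(baseline, suspect)
	fmt.Printf("z = %.1f, anomalous = %v\n", z, z > 3)
}
```

Running it flags the suspect version as anomalous, which is the cue to go read the code by hand - matching the point that interpreting the anomaly is still the hard part.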
Apple can defend against unauthorized calls to even runtime-composed method names though. I can think of a few ways.
They could move as much "private" functionality as possible outside of Objective-C objects entirely, which requires that you know the C function name and makes it obvious when you've linked to it. This should probably be done for at least the really big things like obtaining a device ID or listing processes.
Even if they stick with Objective-C, they could have an obfuscation process internal to Apple that generates names for private methods. Their own developers could use something stable and sane to refer to the methods but each minor iOS update could aggressively change the names. If the methods are regularly breaking with each release and they're much harder to find in the first place, that may be a sufficient deterrent to other developers.
They could make it so that the methods are not even callable outside of certain framework binaries, or they could examine the call stack to require certain parent APIs. At least that way, if you want to call a private API, you have to somehow trick a public API into doing it for you.
And, I think Apple does say somewhere that developers shouldn't use leading underscores for their own APIs. They could hack NSSelectorFromString(), etc. to refuse to return selectors that match certain Apple-reserved patterns in all circumstances.
"> We believe the developers of these apps aren't aware of this since the SDK is delivered in binary form, obfuscated, and user info is uploaded to Youmi's server, not the app's.
Know your binaries?
It looks very nice. Would you be willing to write about the tech stack you used for this?