Besides what has already been said, it is also important to emphasize that Python 2 is already pretty good. It's not that Python 3 is bad; it's just very hard to improve on 2.
Ironing out the warts is good, but this was not the right time. This should have happened 7-10 years ago.
Nowadays I can't imagine many people discounting Python because of the print statement, unicode support, division rules, or the lack of a yield from statement.
What will matter is performance, concurrency, the ability to create web back-ends, installing packages, testing frameworks, and IDE support.
Apart from allowing optional type annotation syntax I just don't see Python 3 providing a good enough carrot to force many projects to switch to it.
Imagine you go to a manager and tell him: "This 800K-line project we have in Python 2 will be ported to Python 3; can we have 1 month to do that?" The manager might say, "Well, we have these features to implement, but if you all say so. What will we gain by it, though, to offset the time spent (opportunity cost) and the risk of breakages?" And if the answer is "Oh, you know, print is not a statement anymore, many dictionary and sequence methods now return iterators instead of lists, and there's this new Twisted-like async library..." well, you can imagine many a manager might just say, "That is just not enough."
If instead the dev team came back and said "They integrated PyPy, an STM module, the requests module, static type checking via annotations, a 3x speed improvement, and no more GIL, so we can do some CPU-intensive work if we need to," I can easily see this proverbial manager OK-ing that.
I always thought of Python as being a great utility programming language. It's not really a specialist, more of a jack of all trades.
For example PHP is all about web development. Ruby is probably most well known for Rails and also widely used for web development. Python is widely used for web development, but that's not necessarily the first thing you think of for Python.
What's going to keep any programming language alive is the libraries that become so well entrenched that a competing library would have a serious uphill battle to even come close to matching functionality. Python has a lot of libraries like this for scientific tools.
I'm always skeptical when I hear that a developer has moved from X programming language to Go. I wonder how many of these tales are from developers who are actually referring to what they do in their spare time rather than their day jobs. Go is still early enough that doing the sorts of things which create the most jobs is still more painful than it needs to be (and so you would probably be doing these things in a different ecosystem.) It seems that the real Go job-generating stories are from start-ups which have hit some momentum, received funding, and are rewriting parts of their stack in Go.
The mass job generators are still at the Rails / Django / PHP / JS levels.
It was all fine once I got it worked out, but it would be so much nicer to provide a requirements.txt file and have pip figure out the ordering and dependency stuff. That, and being able to install binary packages in a simple way from pip, would make me much happier with Python (no more waiting 10 minutes for numpy or whatever to compile).
As far as actual language features go however, I still find python a joy to work in.
edit: I wanted to respond to this myself, since upon rereading I no longer get the impression the proposed changes need to 'break' backwards compatibility per se. For the suggestion of removing the GIL specifically, this would necessitate such a complete revolution in the design of Python programs that even if, say, the libraries that had already been ported at the time of 3.2 still worked in 3.9, their implementation would be senseless by 3.9 conventions.
People are leaving Python for Go because people have always left Python for fast compiled languages. Google ditched Python for C++ and Java. Java! I've seen a lot of projects get re-written in Java from Python, but no-one worried then.
Python 3 adds some cool stuff (async, in particular), and fixes some warts. It's a bit rude of them to force people to upgrade, but that will eventually pay off. It will add more things in the future.
The people who start new projects in Python 3 will have some short-term pain, as some libraries take time to port. There will be a long term benefit, though - future libraries will be better for Python 3, and they won't have to port their project.
The only controversial thing was the use of unicode. IMO, Python 3 made the right choice - you should make everything unicode wherever it's feasible, because it's just a mess otherwise.
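The str/bytes split is easy to see in a few lines (a minimal Python 3 sketch):

```python
# Python 3 separates text (str) from binary data (bytes),
# so encoding and decoding are always explicit.
text = "naïve café"
data = text.encode("utf-8")   # str -> bytes, explicit codec

assert isinstance(text, str)
assert isinstance(data, bytes)
assert data.decode("utf-8") == text  # round-trips cleanly
```

In Python 2 the same mixing of text and byte strings would silently "work" until the first non-ASCII character showed up.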
If you think that Python and Go are made for the same tasks then you're really confused.
But I can't imagine another language that covers all of this, spreading so nicely and readably from command-line scripts to scientific computing to big server applications.
Python's use cases will not go anywhere, so don't panic: Python is doing just fine and improving in many areas while holding on to its core values.
Any evidence? If it's personal experience, then mine is exactly the opposite.
I don't know if such a thing exists, but maybe a big list of the main changes would help convince people more (type annotations, yield from, the forthcoming @ operator). From what I've seen and read, of course all this is somewhere in the docs and release notes, but I've never seen a clean concise list of the main new features, fixes and reasons why these features are cool.
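For what it's worth, two of those features fit in a few lines; this is just an illustrative sketch, not any official list:

```python
from typing import List

# "yield from" (Python 3.3+) delegates cleanly to a sub-generator.
def flatten(lists):
    for sub in lists:
        yield from sub

# Optional type annotations document intent without changing
# runtime behavior; tools can check them statically.
def total(xs: List[int]) -> int:
    return sum(xs)

print(list(flatten([[1, 2], [3]])))  # [1, 2, 3]
print(total([1, 2, 3]))              # 6
```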
The people I know who use Python, including myself, range from utter beginners to experienced programmers, but are using a relatively small subset of the available libraries, and are just using whichever version we started with. I could translate my code to version 3 in a heartbeat, but have no particular reason to do so. I've translated some of my most important stuff from BASIC to Python after all.
Professional developers will do whatever is right for their projects.
My concern would be for the folks who develop and maintain the libraries -- for whose generosity I'm grateful. If there were some sort of consensus on the direction of Python, I'd hop on the bus just to make life easier for those developers. Their time would be better spent adding useful features or just combing the code for bugs, than coping with multiple Python versions.
Could a single Python interpreter somehow manage a mixture of 2 and 3 code?
I think this will happen eventually, what with some of the recent PEPs; I just wish it could happen faster. Optional typing is the best of both worlds and there is no reason not to have it.
Python shouldn't start a policy of breaking things just because Python 3 is less successful than hoped.
Python remains the language of choice for introducing programming because it is so simple. It isn't fast, and it might not be very well suited for large-scale, long-term use. That's okay.
This appeal to beginners, which the article claims is waning, is the vital force of Python; as long as it is the de facto language for beginners, it will never go the way of Perl.
I have seen no evidence that Python is "dying" in any way, nor that people are dropping it because it lacks radical new feature X. Things may be more competitive now, but I don't see any stagnation in the community - and that's always been one of Python's strongest points.
Well, people start using it now. I'm teaching Python classes exclusively in Python 3, and do all my personal projects in Python 3 and love it.
I would like to finally have a "stable" Python 3 with forward compatibility. This is important for the future of the language; otherwise no one will invest in it.
I use it every day and even though I love JS (and have been thinking about looking into Go because of all the positive noise) I don't see Python going anywhere for me at least. It is simply too handy and familiar. Maybe it just will be used slightly less by some?
I'd have to see some real stats that Python is dying to believe it.
Plastic pallets are vastly superior in the long run to wooden pallets in terms of durability, and many companies have used them for almost two decades interchangeably with wooden pallets. One of the issues here is that once a pallet enters the supply chain, who knows if or when you'll get them back. When I first started working for the grocery chain, there were over 30 pallets of back inventory sitting in the warehouse (this is a very bad thing and I corrected it during my time with the company).
The main problem with wooden pallets is that they are largely made with inferior wood that can't stand up to the stresses applied to them for a long period of time. If any of the center slats break, there's no problem. If one of the ends break, then you're probably going to be cleaning up a warehouse floor from whatever was on the pallet. People are also more likely to walk away with a wooden pallet since they have plenty of utility and are very inexpensive.
That isn't to say that plastic pallets are without fault, even if they are virtually indestructible. Most plastic pallets are manufactured with a diamond plate pattern. That's nice and all, but it's still plastic and therefore, slippery. Place some frozen or refrigerated food on a pallet destined for a windy area and have fun cleaning bananas out of the back of the truck.
So the best of both worlds would be a plastic pallet with a non-slip coating on the top of the pallet and the feet.
Pepsi, Coke, and other soft drink and beer distribution companies even have their own, smaller, powered pallet jacks. The powered pallet jacks that grocery and other retail stores use are usually too big for these smaller pallets.
These companies have been slowly moving to plastic pallets over the past decade, and wooden pallets of those dimensions are hard to find today. The plastic pallets offer some level of 4-way access as well.
Edit: I had forgotten about these full-size plastic pallets. They are so much cleaner and easier to use. Even new whitewood pallets leave behind a cloud of splinters and wood dust. These leave behind some shipping dust as there are no cracks for dust to fall through. They stack more securely and don't get heavier when wet like wooden pallets. And you can stack about twice as many in the same space.
They are usually made out of higher quality wood and quite durable. The system works by trading pallets for pallets, so if you receive some goods on a EUR pallet the driver takes an empty EUR pallet from your stack and it'll be reused at the other company.
I will admit, though, that CHEP are the best when it comes to quality (other than iGPS, perhaps). They are far less likely to splinter, warp, and degrade. They are also much more resilient to the abuse that the supply chain puts on them.
An important point to remember is automation. The logistics industry is becoming more and more automated. Consistent, high-quality pallets are becoming a must. The typical hi-lo is becoming less and less common, while in its place are robotic automated guided vehicles, distribution conveyors, and high-bay storage and retrieval machines. I know from experience how much pain and frustration is caused by broken stringers, splinters, and warped whitewood (in our industry, we call them GMA pallets [Grocery Manufacturers Association]). Literally days of lost time annually, which can equate to millions of dollars.
I have no real opinion about what the best direction is. You either fork out more up front for the good stuff (with all of its politics), or you just deal with the bad quality and inconsistencies of whitewood.
I have to say, this was a very interesting article...a nice change from Python 2.7 vs. 3.X!
> What is most vexing to many recyclers is the belief that the accumulation of blue pallets in their yards is not an accident, but a deliberate CHEP strategy. After all, collecting these stray pallets takes a lot of labor, a lot of miles, and a lot of trucks. If you are CHEP, why do this work yourself if you can get someone else to do it for you, at a price that you dictate?
> In 2008, a group of recyclers filed a class action lawsuit, claiming that CHEP was leveraging its dominant market position, and violating anti-trust law, by transferring its operational costs onto recyclers. The recyclers argued that CHEP had made them into a conscripted collection army.
How many hidden industries like the pallets one are out there, waiting for software (or hardware) aid or disruption? I mean, 3.5 billion in pallet-related revenues :) , and millions in losses due to lack of tracking... is RFID really the best solution?
After all, collecting these stray pallets takes a lot of labor, a lot of miles, and a lot of trucks.
they receive blue pallets whether they want them or not
So the recyclers' hands are tied because they receive pallets mixed in with white pallets when delivered by the truckload, but then they get to turn around and talk of all the effort collecting them. It's not hard - just educate your customers - "I won't pay for blue pallets, because they're rented equipment. You can ship them to me, but I'll reduce the payout for a truckload by the number of blues". What the article is promoting is that the recyclers get to play the innocent... then directly profit off it.
Don't get me wrong, I think the idea of renting pallets is stupid, but then again, I'm not making $3.5B/year. But the article seems to gloss over the fact that selling something you do not own is not legal, regardless of how much labour you put into it (otherwise burglary and fencing would be legal). These pallets are clearly marked; it's not like they're easy to confuse with anything else.
Also, you can register an account for an occasional email.
PS) We're completely free and open source: https://github.com/metacademy/metacademy-application
To take a random example I'm familiar with, making small modifications outside certain small bounds requires a lot of paperwork and approval. This can be something as simple as adding a tow hook to an airplane known to be good for towing. If you're lucky and the modification has been done before and somebody went through the trouble of getting the modification certified, you can take advantage of the work they've already done, greatly reducing the trouble involved as long as you can get permission from whoever got it certified. If you're doing something totally new (or something other people have done, but nobody got it certified for general use) then you have to file a form describing what you're going to do, get it approved, do the work, get the result inspected....
All this even for small aircraft where you'd be very hard-pressed to use them to kill more than two people (including the pilot) even if for some reason you had a goal of maximizing deaths.
Yet, when handling nuclear waste, apparently people can just randomly decide to completely change an important component used in the process?
Or was the change studied and approved by an engineer, but the problem was missed? The article certainly doesn't make it sound like this happened, but it could be an omission.
From what I understand from local papers it is also believed a piece of salt fell from the ceiling (waste is housed in an old salt mine).
I read it as
"Organic Cat Litter Chief" - "Suspect in Nuclear Waste Accident"
As opposed to
"Organic Cat Litter" - "Chief Suspect in Nuclear Waste Accident"
I kept wondering when the CEO of a Cat Litter company was going to be blamed for something.
It looks like they caught it to me.
That was historically one of the points of highest friction at my consultancy during contract negotiation, because every lawyer had a different idea of how to totally derisk the IP assignment for the client, many of which were not compatible with me signing them and then continuing to run a consultancy or software company. (Hypothetical example: If I'm doing A/B testing for you, I am of course amenable to giving you copyright to code/copy/reports delivered to you, but I'm not going to give you exclusive rights over "all procedures and knowhow used in the production of the deliverables.")
Word to the wise: when you have your lawyer draft your standard contract, ask them "Hey can we have IP assignment happen only after SoW's associated invoices have been paid in full?" That's a valuable lever to have to encourage clients operating in good faith to prioritize getting your invoices paid expeditiously.
At the least, you should look up the work-for-hire laws in your jurisdiction. Or, you know, work with a lawyer to learn your rights.
(edit: copy tweaks)
e.g. If I'm contracted to develop a web app it's an original arrangement of an existing programming language.
I don't get it, though. The business wants the software - they pay to have it made. And then they also want rights to it. I really like how you have to make copyright assignment explicit in the contract, because in general it is ridiculous to write code and then lock it behind a vault door and treat it like liquid gold when other people could benefit from it.
I've never heard of Mahescandra, but Cochrane is the guy the famous Cochrane gambit in the Petroff is named after, where White sacrifices a piece on move 4 (1.e4 e5 2.Nf3 Nf6 3.Nxe5 d6 4.Nxf7).
So what I would be interested to see from your data set is a relation between opening performance and its popularity. Did people stop playing the Pirc due to sub-par results compared to other openings at the time (like I imagine happened with the Vienna) or did it simply fall out of fashion? It would be interesting to know which lines always had good results, but just stopped being popular for whatever reason. They could be due for a revival.
So basically, my opening move would be classed as 'other' but really it is one of 1.d4, 1.e4, 1.c4 in terms of the classifications of this post.
(I'm sure in the general sense games with a very large n aren't as I suppose a game could be played in perpetuity)
I don't know how much this is actually true though. That would also be a good thing to look over
Followed Graeme's instructions and after about 30 mins spent building the latest Mono I was quickly able to get ASP.NET vNext up and running on OSX.
Found one small mistake in the instructions, the switch --feed should be --source in the kpm restore step.
Can anyone imagine a compelling business case for this? I am not used to corporations being overly generous.
And no comments allowed, either! Changing formats to be hip is fun!
So even if you don't agree with my politics, maybe you'll agree with my geology. Let's not build a vast, distributed global network only to put everything in one place!
That slide hits close to home. I'm painfully aware of how hard (and almost pointless/powerless) it is to reason about long-term geological risks, esp compared to less catastrophic and more (short-term) predictable hazards like hurricanes, tornadoes or blizzards; but from time to time I idly question the wisdom, from a civilizational point of view, of having so many concentrated, incredibly talented people living directly atop one of the most dangerous fault regions on earth.
But again, it's pointless to think about it as an individual, so better get back to work and keep living day by day, I guess. Wovon man nicht sprechen kann... ("Whereof one cannot speak...")
I have come to believe that businesses should not be legally allowed to store any consumer data unless it's obvious to the consumer that it's absolutely required for the primary function of the service, and they should only be allowed to store data for that one function, with an exception if the consumer explicitly and voluntarily opts-in for each additional function.
Large internet companies have been collecting swathes of data with the claim that they are secretly using it to improve people's lives. But it seems to me that A/B testing has failed to improve anyone's life.
Example: I use search engines to search for something I'm looking for.
I do not benefit from being shown 'targeted' ads, nor from the search engine identifying the most populist answers which it uses to spoon-feed me later rather than serve what I asked for, nor from the search engine identifying which particular arrangement of pixels will leave me personally more addicted.
Businesses are welcome to use my data in ways which are in my interest, but they should not get to decide which of these uses are in my interest.
I thought it worth noting that Google does strip personal identifiers after 18 months which is in line with one of his proposed fixes.
"Investor storytime only works if you can argue that advertising in the future is going to be effective and lucrative in ways it just isn't today. If the investors stop believing this, the money will dry up."
I laughed very hard on this!
In all, an excellent article. I disagree with blind faith in technology to solve all our problems and not create new ones.
People often forget that technology is just tools (and not always neutral tools, as another naive belief goes: some inventions have a larger inherent "harm potential"), and that policy matters as much or even more, as does the kind of cultural landscape that guides our use of those tools.
(Remember the classic xkcd comic: http://xkcd.com/538/ ).
It's also a little bit funny that it was written by the guy behind pinboard.in, a nice social bookmarking service (where many people went when del.icio.us died). But that makes me trust the service more, not less.
Which probably means I am stupid.
Although I disagree with the idea of regulating how long behavioral data is saved. Not all behavioral data is sensitive. Rather, we should consider fully disclosing to users how long their data will be saved, or what data has been collected on them, or both. Any other regulation may be too burdensome for startups.
=== Examples ===
His suggestion that all behavioral data be deleted after a certain period of time means every little piece of data collected must also carry a timestamp, inflating databases and costing money.
A program must be written that seeks out timestamped data ready to expire and delete it.
If the deleted data is connected with other pieces of data or reports elsewhere we're going to run into complex problems.
These obligations must be handed down from company to company during acquisitions. A company selling data about to expire will get acquired for a lot less than a company with fresh data. This may in turn cause a series of unforeseen consequences in the acquisition market.
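The expiry sweep described above could look something like this (a hypothetical toy, assuming each record carries a created_at timestamp; the retention window and record shape are made up for illustration):

```python
import time

RETENTION_SECONDS = 18 * 30 * 24 * 3600  # e.g. roughly 18 months

# Hypothetical records: every row must carry a created_at timestamp.
rows = [
    {"id": 1, "created_at": time.time() - RETENTION_SECONDS - 1},  # expired
    {"id": 2, "created_at": time.time()},                          # fresh
]

def purge_expired(rows, now=None):
    """Drop rows older than the retention window."""
    now = now if now is not None else time.time()
    return [r for r in rows if now - r["created_at"] < RETENTION_SECONDS]

print([r["id"] for r in purge_expired(rows)])  # [2]
```

Even this toy shows the cost: every table grows a timestamp column, and something has to run this sweep on a schedule.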
=== Solution ===
Rather than controlling and manipulating what can and cannot be done, it may be best to just create transparent policies and let the free market converge its way towards a compromise.
Liked the use of kilometers over miles.
"There was an ad for the new Pixies album. This was the one ad that was well targeted; I love the Pixies. I got the torrent right away. "
Kalid Azad's oft-posted article: http://betterexplained.com/articles/a-visual-intuitive-guide...
Tristan Needham's book Visual Complex Analysis: http://usf.usfca.edu/vca//
Contrary to Moghadam's comments, the diary is not particularly well-written. It's long, repetitive, weirdly detailed (the author recounts meals eaten years ago), and studded with evidence of psychopathy.
RG's style of annotation works extremely well for some kinds of writing --- song lyrics, The Great Gatsby, TS Eliot poems. What I think those things have in common is that they're hospitable to "riffing" and cross-linking; for instance, the lyrics to the ICP song where they come out of the closet as religious are totally incongruous until RG annotations inform you that they're reprised lyrics from previous ICP songs.
But riffing on Rodger's diary doesn't serve the same purpose, at least so close to the event. It is instead a minefield; almost anything you can say risks diminishing the tragedy, or misapprehending how the mind of a deeply mentally ill person functions, or, god help us, using the output of that mind as a platform on which to build suggestions on changing our culture.
There may be some point at which RG annotations will add value to this terribly sad artifact of Elliot Rodger, but it probably won't be in 2014.
As an aside, it has always amazed me how many people over the years have failed to realize Rap Genius' gimmick is just that, a gimmick. It's their attempt at using an admittedly off-color flavor of comedy to build their brand. Said brand is heavily rooted in rap, which is one of the most politically incorrect and offensive mediums of pop culture in existence today.
It stands to reason that when the Rap Genius founders are in character, their behavior should not be taken literally as a reflection of who they really are as people.
A good example of this is when they were featured on stage at TechCrunch Disrupt 2013:
Most people simply took them literally, were offended, and jumped on the revulsion bandwagon. Others understood that the RG guys were essentially mocking the startup scene and the rap scene at the same time, in effect making fun of themselves.
In Mahbod's case specifically, it seemed like he was aiming for humor that went right up to the line but didn't cross it. Unfortunately, comedy is a hit-or-miss endeavor and some of the misses were bound to cross that line. Add to that his medical issues potentially adversely affecting his judgement, and it's no wonder some of them did.
Was what he said inappropriate? Absolutely.
Should he have been fired for it? Debatable.
Should we assume he's a terrible human being (as some other comments have implied)? Certainly not.
Mahbod compliments several of Rodger's sentences as "artful" and/or "beautifully written". That is okay, if ill-timed. One can make a stylistic statement about Mein Kampf without endorsing its message.
He also, however, speculates that Rodger's "sister is smokin' hot." That is wildly inappropriate, particularly given the misogynistic nature of Elliot's crimes.
Electric bicycles are a good, existing market (especially in Europe), but there are already some good options there, and I suspect they'd prefer something where their R&D is more directly applicable (bicycles are heavier, and human power can take a bigger proportion of the propulsion load). OTOH, it might be possible to engineer an add-on solution for existing bikes that would work well.
Electric scooters seem like a good bet. The form factor is similar, and I'd imagine many of the technologies can be directly transplanted. I think a folding electric scooter could be more practical than a longboard for many people.
There are of course more exotic options. The self-balancing electric unicycle (which was on HN the other day) was interesting, and I'd hope there are other things waiting to be invented.
Anyway - I'm very excited about the increased diversity in transport options.
I'm not sure what Boosted is bringing to the table honestly, compared to brands like [Evolve](http://evolveskateboardsusa.com/) that have been out for a while now and have better specs at a significantly lower price.
Outside of that, awesome job on the product. Looks great.
Famo.us is making the same mistake for some reason: aggressively targeting mobile, with half of the toolchain needed to actually ship mobile apps still missing.
Not that I can blame the OP, who is likely not a SF startup with millions in funding.
Please just have one site, make it efficient, and be done with it.
I always feel totally alienated by the mobile page, there is information left out, annoying badly-implemented JS scrollers, etc...
The "desktop" site looks always more nice and familiar, and you can zoom and scroll around as you wish to read everything.
Sadly, that's rarely the case.
A site like Forbes gets paid every time you click through to another article. Once you've landed on an article and you're reading it, your value to them has been expended. Thus they have almost a diametrically opposed interest to yours: you want to read the content uninterrupted, and they want you to click another link.
Yes, "fixed bars" are annoying. We've had "fixed bars" on non-mobile for as long as I can remember: the main menu bar, the title bar, and now a "tab bar" (which is just the modern version of Windows MDI), navigation bar(s) that sometimes take up a quarter of the screen height (I kid you not), and the bookmarks bar, which gets hidden the first time I open the browser so all the corporate links get deleted - what a pain.
Including your "bar" in the contents and letting it scroll away might be easiest, and the author noted medium's clever idea.
I built headroom.js to handle exactly this. It simply adds classes on scroll up or down, so you can be as fancy as you want (or not!) with the show/hide effect. You can set a custom offset (e.g. don't invoke the hide/show mechanism until 100px down the page), a tolerance (e.g. must have scrolled more than 10px before hide/show), and a few other features for more advanced usage.
And for fun I built a little playground so you can explore the various features and find a configuration you like.
(meta: I submitted it here, but it never gained traction, someone else submitted it to designer news and it absolutely blew up, can't believe it almost has 4000 stars!)
I highly recommend refraining from using position:fixed on mobile devices.
Do any mobile browsers have a "return to top of page" function? My keyboard has a "Home" key, my phone does not.
An interesting way to solve the issue is to hide the bar when scrolling down, and show it when scrolling up.
I typically stick to reading around the top of my device, and occasionally I want to re-read something I just read. Instead of just getting to re-read the hidden lines, I have to continue scrolling while stupid chrome or a fixed bar appears, and then finally lets me scroll the content.
It's probably the case that a lot of people love this, but I hate it. If I want to see the browser chrome or navigational elements, I'm happy to tap the top of the window to scroll me there. I don't want the browser trying to figure out what I want to do based purely on scrolling.
The only real issue is that mobile browsers don't allow extensions of any kind (at least the ones I've used don't). So there is no real way to add such customizations to mobile browsers, and we're left hoping they go mainstream enough that someone either creates a browser around that feature (ie useragent switch, which is kind of annoying to use because it means copying the url and switching apps), or a dev in a mainstream browser makes it their weekend project. Neither one of these options is ideal, in the first you're left with a bunch of browsers that do one thing, in the latter you're going to end up with a feature that will slowly stop working as the dev's main work builds and his manager tells him to drop it.
I don't want a page to analyze my every behavior and try to predict what I want to do. Sometimes I just like to scroll up and down to look for stuff, and I don't want some bar flashing in and out.
No. When I scroll up, that's what I usually want to do: scroll up, see some content that is currently out of view. If that bar appears first, I have to swipe an inch more, which doesn't sound that horrible, but it results in an inconsistency between mental model (swipe down 1 inch, see what is 1 inch above) and technological reality (sorry, you need to scroll 2 inches!).
In the eBay Android app, where I want to quickly compare search results, this annoys the hell out of me.
One of the best things about touch interfaces is the natural mapping between mental model and technology. Let's not break this.
In fact, the iOS behavior is rather more nuanced:
* Scrolling down hides chrome
* Swiping up quickly reveals chrome
* Scrolling up slowly does not reveal chrome
* Scrolling to the top of the page reveals chrome
* Over-scrolling past the bottom reveals chrome
I tend to read tweets from oldest to newest, and the "Home/Discover/Activity" bar always gets on my way. Moreover, I don't even need the top blue bar. Just gimme the content!
The HN guidelines ask you not to rewrite titles. Especially please do not rewrite them to make them more controversial.
And that's why typing DEL is properly a forward delete operation: it eliminates the character under the cursor by overpunching it into a DEL, which is ignored when reading the tape, and (like any other punch) it advances to the next position. (I blame the VT220 for mucking this up and leading to endless confusion.)
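The mechanics are easy to mimic in software; a toy sketch (function names are mine) of why overpunching to DEL (0x7F, every hole punched) works as a forward delete:

```python
DEL = 0x7F  # all seven bits punched

def punch_delete(tape: bytearray, pos: int) -> None:
    """Punching can only add holes, so any character can become DEL."""
    tape[pos] = tape[pos] | DEL

def read_tape(tape: bytes) -> bytes:
    """Tape readers simply skip DEL codes."""
    return bytes(b for b in tape if b != DEL)
```

Since every 7-bit code ORed with 0x7F is 0x7F, the deleted character is unrecoverable, exactly as with a physical punch.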
I have dreams of creating a modern microcomputer. A computer for the hacker masses, inexpensive, with modest specifications and a simple design, with ample parallel and serial IO. Something that puts you close to the metal with few distractions and limited complexity, like an Arduino but interactive and self-contained. Like what the Raspberry Pi was meant to be, but without binary blobs and complicated operating systems.
Is that an idea that appeals to anyone else? Whenever I think about it, I feel all warm and fuzzy inside. I know some of it is nostalgia, but I also think there is a lot to be said for the creativity and inspiration that arises from working in simple constrained systems.
Am I right?
He donated it to the San Jose tech museum I think.
So, as a very simple example, you don't have a Comment node with attributes for the person who wrote the comment and the article the comment is associated with. You just have edges pointing back to those things. Nowhere in the comment, or even in the edges, is there anything that looks like an ID or foreign key.
Unlike a document DB, however, you don't have weirdness once you have something like co-authorship. Just point to both authors, no need to duplicate the data or set up some kind of pseudo foreign key. Once you get the hang of it, it's a really elegant way to store data.
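A toy sketch of that edge-only model (the class and edge labels here are mine, not any particular graph DB's API): the comment object itself holds no IDs, and co-authorship is just a second edge.

```python
class Node:
    """A graph node: properties plus outgoing labeled edges, no foreign keys."""
    def __init__(self, **props):
        self.props = props
        self.out = []  # outgoing edges as (label, target Node) pairs

    def link(self, label, target):
        self.out.append((label, target))

    def neighbors(self, label):
        return [t for (l, t) in self.out if l == label]

alice = Node(name="alice")
bob = Node(name="bob")
article = Node(title="Graph DBs")

comment = Node(text="nice post")  # note: no author_id, no article_id anywhere
comment.link("written_by", alice)
comment.link("written_by", bob)   # co-authorship: just add another edge
comment.link("on", article)
```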
Still, "Relational databases are fine things, even for large data sets, up to the point where you have to join. And in every relational database use case that weve seen, theres always a join and in extreme cases, when an ORM has written and hidden particularly poor SQL, many indiscriminate joins."
It seems like the overall argument is for (what I see as) a step backward from the declarative model to a lower level imperative model. "You never know what memory your implicit declarations will allocate, better do everything in explicit c-like loops as your data expands."
It's almost like an argument for a return to the world of "hardware is expensive, people are cheap" and for all I know that's what's happening with really big data. But it seems a bit sad to present it as a step forward.
Is there any information about this?
I wanted to build a system with tagged content and thought about using a graph DB, for (soft) realtime queries etc.
I've had to use both a few times to restore some files, and in one case the HD of my MBP went bad... Through CrashPlan I was able to restore everything (it took almost 12 hours to re-download it all), but in the end, to my extreme surprise, I lost less than 5 minutes of work, since CrashPlan was quite up to date.
Of course I use Dropbox too, but I don't consider it a backup destination so much as a syncing service. For a few months now I've also had a FileTransporter from Connected Data, and I've started to use it more and more since I can store up to 1TB with no monthly fee, but I haven't yet made the full jump from Dropbox.
I'd be curious to hear anyone else's solution.
Making sure that your backups actually can be restored is also extremely important; there's not much worse than thinking that you have backups, but when you need them, find that they've become corrupted and unusable.
It's comprehensive, covers off-line bare metal backups (which aren't exactly changing any time soon), points you at tools like rdiff-backup which you can use to get reasonably close to continuous data protection (I do it every hour), etc. etc. Along with a few good and short war stories. And preps you for the big times, if you're interested.
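For illustration, an hourly rdiff-backup job plus a weekly restore check might look like the crontab fragment below (the paths and schedule are assumptions; check your rdiff-backup version's docs for --verify support):

```
# /etc/cron.d/backups (hypothetical paths)
# Hourly incremental backup of /home
0 * * * *  root  rdiff-backup /home /mnt/backup/home
# Weekly: check the most recent mirror against its stored checksums,
# so you find out about corruption before you need a restore
0 3 * * 0  root  rdiff-backup --verify /mnt/backup/home
```

The --verify pass is the part people skip; an unverified backup is the one that turns out corrupted exactly when you need it.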
You do need barriers: issue a write barrier before giving a copy of the write pointer to the reader. Issue a read barrier before reading the data.
Avoid cache line thrashing in a mostly full case by never allowing the circular buffer to become completely full (leave at least one cache line free).
The reader thread can quickly poll many circular buffers for work. The pointers will all reside in the reader's cache until someone has written some data (and updates the reader's copy of the write pointer).
You can get more benefits by delaying the update of the other thread's pointers: on the writer's side, until we have no more data available to write. On the reader's side, until we have read all the data (or some larger chunk of it). This allows the cache-line prefetching hardware to work (to prefetch likely used data).
Anyway, if you really want to use a linked list, at the very least allocate multiple items per cache line, and then link the cache lines together (so one link pointer or less per cache line).
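A minimal sketch of such a single-producer/single-consumer ring in C11 (names and sizes are mine): the release store publishes the data and the acquire load pairs with it, playing the roles of the write and read barriers described above.

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 8u  /* power of two; illustrative only */

struct ring {
    _Atomic uint32_t head;   /* only the producer writes this */
    _Atomic uint32_t tail;   /* only the consumer writes this */
    int data[RING_SIZE];
};

/* Producer: write the payload first, then publish it with a release store
   (the "write barrier before giving the write pointer to the reader"). */
static int ring_push(struct ring *r, int v) {
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return 0;                                  /* full */
    r->data[head % RING_SIZE] = v;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* Consumer: the acquire load of head is the matching "read barrier";
   only after it may the payload safely be read. */
static int ring_pop(struct ring *r, int *out) {
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return 0;                                  /* empty */
    *out = r->data[tail % RING_SIZE];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}
```

A production version would additionally pad head and tail onto separate cache lines and cap the fill level below RING_SIZE, per the thrashing advice above.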
C11 has a new header, stdatomic.h. You can declare _Atomic on almost any type, including structs, but the implementation may back a large type with a hidden lock; in practice only integer-sized types are reliably lock-free.
How do you perform CAS on a struct in C (without using a mutex, of course)?
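One common answer, sketched here under the assumption that the struct fits in a lock-free integer width: type-pun it through a uint64_t and CAS that (all names below are mine).

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

struct pair { uint32_t lo; uint32_t hi; };  /* 8 bytes, no padding */

static _Atomic uint64_t slot;               /* holds a struct pair, bitwise */

/* CAS a struct by round-tripping it through an integer of the same size.
   memcpy (rather than a pointer cast) sidesteps strict-aliasing trouble. */
static int pair_cas(struct pair expected, struct pair desired) {
    uint64_t e, d;
    memcpy(&e, &expected, sizeof e);
    memcpy(&d, &desired, sizeof d);
    return atomic_compare_exchange_strong(&slot, &e, d);
}
```

For structs wider than the largest lock-free atomic, the usual fallback is to CAS a pointer to an immutable copy of the struct instead.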
Relevant to me as I'm in the rough-in phase of a whole-home remodel project. I struggled a lot with what level of control to use, before settling on a minimal Lutron system for some areas of the house, comfortable in the knowledge that if I get tired of the Lutron interface, I can create my own & poke it over the network. I understand most people aren't interested in this level of control, but they might be concerned when their interface becomes dated in 5 years and can't be replaced due to an incompatibility with the hard-wired infrastructure. I see this as a downside of closed systems like Control4.
Another interesting side effect of this will be how it affects Apple's 'coolness'. The only people willing to spend money on home automation tech are those who own homes or who don't plan on moving any time soon - i.e. not young people.
My garden however is a complex mess that could benefit from automation.
Yes, I know some home automation devices have the server built into every device (every plug, every camera, every light is a standalone "thing" in the Internet of Things), but this adds significantly to the cost. On the other hand, with some good software, it makes controlling everything without a central hub possible.
However, there is another side to this, and that is monitoring.
Home monitoring technology opens up a much larger group of interested consumers for this.
My point is we shouldn't just be talking about automation here. Automation is only part of the story. Monitoring may be a much bigger part.
I hate to do this, because HN strongly prefers original sources. Of course, people sometimes post a Google search url that one can click through to read the OP. But we can't make that the official URL for the post.
If anyone has a suggestion to solve this problem, please let us know.
And here are others: https://news.google.com/news/rtc?ncl=dFL-vXhTfGbu2BMocXtQJZ9...
Firstly, they cannot (or will not) make their devices multi-user, which is a problem in the home environment (e.g. an Apple TV that's tied to only my account - and already exposes more info than I'd like).
Secondly, Apple has a habit of abandoning things and shutting them down when it doesn't suit them. This is mostly OK for some services, but it would be disastrous for me as a user once I've bought into the system.
Finally, Apple doesn't play nice with others. Perhaps they've learned to be better since they've had to interface more due to the App Store but for the most part I don't see them caring about an ecosystem other than their own.
Generate a UUID for each item on your list upon creation (on the client side), then sync the actions on the item (create, update, delete). There's never a need to "merge" on the server. All you need is to create a serial "history" of all the actions on the server side, and replay them in that order on the clients. You do that by taking a lock on the DB when a client wants to send new actions.
PS: I'm currently coding exactly that, so this is more a way for me to share my ideas than to criticize.
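Since I'm mid-implementation anyway, here's roughly the shape of it as a toy Python sketch (all names are mine, and a real Server.append must run under the DB lock mentioned above):

```python
import uuid

def new_action(verb, item_id=None, **fields):
    """Client side: actions reference items by a client-generated UUID."""
    return {"verb": verb, "item": item_id or str(uuid.uuid4()), "fields": fields}

class Server:
    """Keeps the single serial history; no merging ever happens here."""
    def __init__(self):
        self.log = []

    def append(self, actions):
        self.log.extend(actions)   # under a lock/transaction in real life
        return len(self.log)       # clients remember how far they've replayed

def replay(log):
    """Any client: fold the action log, in order, into current item state."""
    items = {}
    for a in log:
        if a["verb"] == "create":
            items[a["item"]] = dict(a["fields"])
        elif a["verb"] == "update":
            items[a["item"]].update(a["fields"])
        elif a["verb"] == "delete":
            items.pop(a["item"], None)
    return items
```

Because every client replays the same log in the same order, they all converge on identical state without any merge logic.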
Couchbase Lite (http://developer.couchbase.com/mobile/develop/guides/couchba...) does exactly this - except it's not an actual Couch instance, but rather an SQLite database, and a client that speaks the necessary Couch protocols for sync/replication. Not so insane, and (as far as I've seen so far..besides git!) the only tool that really handles sync well across platforms.
edit: Ah, CBL (and Datomic) are mentioned in part 3 of the article. All is well :)
Data is modeled with objects. The object store works offline and online. If you sign up for the service, then you get syncing.
EnduroSync also has a very nice permission model, enabling sharing of object stores in a variety of ways (per user, per app, ...).
Presumably the secret key used to generate the HMAC never leaves the YubiKey? So when you want to change the seed, you need to ask the YubiKey to sign the new seed? So saving the database requires pushing the button on the YubiKey again?
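If I understand the scheme right, it's plain HMAC-SHA1 challenge-response; here's a sketch of what the host side sees. The `secret` below stands in for the one that never leaves the device, and `derive_db_key` is my guess at how a database might combine seed and response, not any particular tool's actual KDF.

```python
import hashlib
import hmac

def yubikey_hmac(secret: bytes, challenge: bytes) -> bytes:
    """What the device computes: HMAC-SHA1(secret, challenge).
    The host only ever sees challenge in, 20-byte response out."""
    return hmac.new(secret, challenge, hashlib.sha1).digest()

def derive_db_key(seed: bytes, secret: bytes) -> bytes:
    """Hypothetical derivation: mix the stored seed with the device's
    response, so opening the DB needs both the file and the key present."""
    return hashlib.sha256(seed + yubikey_hmac(secret, seed)).digest()
```

And yes, under this model changing the seed means sending the new seed as a challenge, so each save would need another button press if touch is required.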
AWS Status Update: 5:51 PM PDT We are investigating connectivity issues for EC2 instances and impaired EBS volumes in a single Availability Zone in the US-WEST-1 Region.
1. The Linux binaries on the new site are packaged as a .tar.gz.gz, which makes `tar xf filename` fail; besides the inconvenience, this is probably a packaging error. You might want to fix that.
2. The OpenFL provided Haxe installer is broken, because it tried to download http://haxe.org/file/haxe-3.1.3-linux64.tar.gz, which is gone. Please either fix the OpenFL installer or make sure download links are backwards compatible by adding a redirect. (I'm assuming both projects are run by the same community)
I worked around problem #2 by modifying the installer script to use old.haxe.org, since the binaries on the new site are harder to use because of problem #1. But this could definitely dissuade newcomers to Haxe, despite all of its qualities (which, in my opinion, are many!)
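For anyone hitting #1 in the meantime, the workaround is just to strip the extra gzip layer. A self-contained demonstration with a dummy archive (filenames made up):

```shell
mkdir -p pkg && echo hello > pkg/haxe
tar czf haxe.tar.gz pkg        # a normal tarball...
gzip haxe.tar.gz               # ...hit by the packaging error: haxe.tar.gz.gz
rm -r pkg                      # pretend all we have is the download
gunzip haxe.tar.gz.gz          # strip the extra layer
tar xf haxe.tar.gz             # now extracts cleanly
```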
The website itself is developed in Haxe, and all of the content is hosted on Github so we can encourage contribution but still keep an eye on the quality, unlike the wiki we had previously. There's a "Contribute" link at the bottom of each page that links you to the relevant file on Github.
If you have any questions let me know, I'm happy to answer. I hope you find it a valuable resource.
What's a good and up-to-date Haxe tutorial for making mobile apps, targeted at Mac developers?
EDIT: I think that impression came from reading this article, possibly found here... or possibly some other article... anyway it talked extensively about AS3 and made comparisons against Haxe.
1 - http://www.grantmathews.com/43
I like the end goal, but am confused why I haven't heard of it until today.
The real interesting statistic is page load times. If load time is remaining static whilst page sizes are increasing (mostly due to images it seems), then there isn't anything to worry about. Businesses are just keeping their websites within a certain performance envelope and are responding to greater bandwidth as it comes about. (Not to mention JS speed increases)
If on the other hand, websites are growing fatter and page load is getting slower, then there is something counter-intuitive happening. But I doubt it.
> The only content type that experienced significant shrinkage was other.
> So ruling out third-party content leaves us speculating that either the shrinkage is due to a decrease in use of video (quite possible) or an undocumented change in the testing process (somewhat possible).
There is some interesting info in the article, but the headline isn't it.
I was surprised to see that at least as far as the HTML is concerned, there hasn't been much change in the past 15 years.
Never knew this thing existed, but seems like a really nice resource.
From the ground the moon and the planets seem so far away, so ethereal. Yet just knowing that we have sent people and probes to touch them underlines how attainable the impossible can be with grit and science, per ardua ad astra. It gives me hope.
I hope that the spirit of international cooperation that has grown around this endeavour endures over the coming decades and beyond.
Also, this is the first time I have taken a close look at the surface of the shuttle, and as @dewey noted, I too found it very uneven and blocky. From a distance it looks reasonably smooth.
But as someone else here mentioned, you really want to understand the reflog in git. Everyone screws up at some point or another, and it's much easier to work through things when you can rely on the reflog to act as a safety net and an anchor of sanity.
The git fixup page comes in quite handy. https://sethrobertson.github.io/GitFixUm/fixup.html
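To see the safety net in action, here's a throwaway-repo demonstration (repo name and commit messages made up): even after a hard reset, the reflog still knows where HEAD was.

```shell
git init -q reflog-demo && cd reflog-demo
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "precious"
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "latest"
git reset -q --hard HEAD~1      # "latest" disappears from git log...
git reflog -1                   # ...but the reflog still records the move
git reset -q --hard "HEAD@{1}"  # jump back to the pre-reset state
```

Nothing committed is actually lost until the reflog entry expires and gc runs, which is why it works as an anchor of sanity.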
function jk
    history | head -n+10 | tail -r | gitjk_cmd
end