At 4 GB, I'd just as soon query this locally, but this looks like a fun exercise.
I notice that there were 10,729 distinct ASINs out of 15,583 Amazon links in 8,399,417 comments. Since I don't generally (ever?) post Amazon links, I'd be interested in expanding on this in two ways.
First, I'd reduce/eliminate the weight of repeated links to the same book by the same commenter.
Second, I'd search for references to the linked books that aren't Amazon links. Someone links to Code Complete? Add it to the list. In a second pass, increment its count every time you see "Code Complete," whether it's in a link or not.
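The two-pass idea above can be sketched in a few lines. This is a minimal illustration, not the original BigQuery analysis: the comment format, the ASIN regex coverage, and the `titles` lookup are all assumptions for the demo.

```python
import re
from collections import defaultdict

# Matches the 10-character ASIN in common Amazon product URL shapes.
# (Real Amazon URLs have more variants; this is a simplified pattern.)
ASIN_RE = re.compile(r"amazon\.com/(?:[^/\s]+/)?(?:dp|gp/product)/([A-Z0-9]{10})")

def count_mentions(comments):
    """comments: iterable of (author, text) pairs (hypothetical format)."""
    # Pass 1: count linked books, de-duplicating repeated links
    # to the same book by the same commenter.
    seen = set()             # (author, asin) pairs already credited
    titles = {}              # asin -> title; assume an external lookup, stubbed here
    counts = defaultdict(int)
    for author, text in comments:
        for asin in ASIN_RE.findall(text):
            if (author, asin) not in seen:
                seen.add((author, asin))
                counts[asin] += 1
    # Pass 2: credit plain-text mentions of each linked title too.
    for asin, title in titles.items():
        for _, text in comments:
            counts[asin] += text.count(title)
    return counts

comments = [
    ("alice", "Read this: https://amazon.com/dp/0735619670"),
    ("alice", "Again: https://amazon.com/dp/0735619670"),  # repeat by same user, not re-counted
]
print(count_mentions(comments)["0735619670"])  # 1
```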
It is not the best when it comes to explaining things in an intuitive manner. It is a great reference book with lots of algorithms and proofs.
In recent years I have been drawn more towards Levitin's "Introduction to the Design and Analysis of Algorithms".
Anyone else have similar feelings about "Introduction to Algorithms"?
"The only book ranking that matters": https://twitter.com/mattyglesias/status/689169613779808257
Is this a result of the author spamming his own work?
Edit: Looks like it. A short skim of "darwin's theorem site:news.ycombinator.com" shows that all links are from user tjradcliffe, who is the author. A case for manual curation of data.
I wrote this curated site from HN several years ago. Got tired of people continuously asking for book recommendations. http://www.hn-books.com/
A couple points of note. This is 1) an example of a static site, 2) terrible UI, 3) contains live searches to comments on each book from all the major hacking sites, and 4) able to record a list of books that you can then share as a link, like so (which was my reason for making the site):
"My favorite programming books? Here they are: http://www.hn-books.com#B0=138&B1=15&B2=118&B3=20&B4=16&B5=1... "
I started writing reviews each month on the books, but because they were all awesome books, I got tired of so many superlatives!
Thanks for the site.
It's the most polarized I've ever seen in my life.
- SICP: Structure and Interpretation of Computer Programs
- CTM: Concepts, Techniques, and Models of Computer Programming
- TAOP: The Art of Prolog
On https://reddit.com/r/bigquery, /u/omicron_n2 left queries to repeat the experiment on HN and on reddit comments too:
And a presentation by /u/Pentium10 on the same topic, using the books that redditors read:
I admire the effort, but calling it Top Books is slightly misleading. Perhaps you could call it "Most Mentioned Books" instead.
Always interesting to read. But just as interesting is how quickly they pop to the top of the home page.
Here's one on understanding the mindset of your investors when raising startup capital - Startup Wealth - http://amzn.to/1Jej8El
The Four Steps to the Epiphany: Successful Strategies for Products that Win
Author: Steven Gary Blank
Publisher: Cafepress.com
Number of links: 45
Not where I live. What to do about it? Move. Find an employer willing to let you work remotely, and find your own quiet cost-conscious piece of paradise.
Can we get the top 100 books as well? (since many of those would have very similar mention numbers as the end of the top-30)
Thanks for the list though. Bought the psychology one.
For VLC and all related VideoLAN projects, we're moving to our own instance of GitLab hosted on our infrastructure.
And to be honest, it's quite good, but a few things are ridiculously limited, to the point that some people in the community are resisting the change.
The first part is the groups and subgroups: it seems incredibly difficult to give sub-groups access to repos (like a team for iOS, one for Android, one for libVLC... but all are under the "videolan/" group). It seems there is a way with the EE, but not in the CE; and the current idea for the CE is to have sub-projects, which is not good, because it will make our URLs way more complex than needed.
The second part is the bug/issue tracker. We use Trac for VLC, and we want to leave it for something better; but GitLab issues are way too limited, even when using the templates. In particular, it seems to be impossible to add custom searchable fields (like "platforms", "priority" or "modules"), which are very useful for queries. There is also no way to build custom queries and store them ("I want all the bugs for Windows that are related to the interface modules").
If I remember correctly, this second part was also a complaint in the open letter to github.
Finally, it's not really related, since it's more a feature request, but we'd love to allow external people to fork our repos, but not create completely new ones (or have them validated) because we don't want to host any projects under the sun (there is github and gitlab for that). So far, you either allow both features or none of them.
PS: can we have custom landing pages and custom logo in the CE version? :D :D
So now it even feels like they're doing Git hosting the right way, making the core software open source, and charging for enterprise features.
On the other hand, I would have probably never paid for GitHub if they followed this model. So I don't think GitHub would have been as successful.
What's great about GitLab, there's a release on the 22nd of each month, so you can depend on pretty much continual improvement. Even if you don't think GitLab is suitable for your Open Source project, talk to the team on their issue tracker, things get solved pretty quickly!
Custom templates: https://secure.phabricator.com/book/phabricator/article/form...
It's better in almost every aspect than GitLab and GitHub.
See https://en.wikipedia.org/wiki/Phabricator for an (incomplete) list of open source projects using it.
I've been a GitLab user for a few years now. Personally I like it much more than GitHub; one of the reasons is that I fear GitHub contains too many projects and gains too much control over OSS. I also dislike their CoS.
Good luck Gitlab!
One issue that was raised several times was the ability to not create merge commits. In GitLab you can, as an alternative to the merge commits, use fast-forward merges or have merge requests be automatically rebased.
The main thing keeping me from actually doing it is the network effect... and this:
Right now GitLab.com is really slow and frequently down. This is because of fast growth in 2015.
GitLab still has a ways to go in terms of performance/reliability and polishing their product, but GitHub ought to be very nervous about them.
The only question left is whether your servers are powerful enough to run GitLab. Maybe I'll sacrifice a goat for some new server hardware and 256 GB of RAM.
In general, I liked it, but it always irked me that its Ruby underpinnings made it hard to upgrade/migrate stuff (we basically just swapped LXC containers at one point, not sure how it was handled during the last upgrade). If anyone ever manages to do a credible alternative that does _not_ use Ruby in any way but keeps the overall GitHub-like workflow, a lot of operations folks will switch _instantly_.
(Like https://try.gogs.io/explore, for instance)
Also, like some commenters already pointed out, the CE edition was ridiculously limited in some regards - we mostly skipped the bits we didn't like and did product-level ticketing outside it (using Trac), with Gitlab issues used only for "techie" stuff, tracking fixes, etc.
But today I'd probably just sign us all up for GitHub and be done with it, or fire up a VM image from some marketplace - there's hardly any point in maintaining our own infrastructure or doing a lot of customization.
That being said, what both GitHub and GitLab are missing is actually becoming a "social network", or maybe more an active network. There are tons of interesting projects that pop up every day that I would be interested in knowing about and contributing to, but there's basically no way to learn about them.
Kudos to the GitLab team for all its work :)
There is an opportunity for Gitlab here and I'm happy that they decided to make this announcement.
The community is the actual winner of this healthy competition.
Once their performance increases, maybe we'll see the momentum shift from Github.
I have never used GitLab myself, but some of the features mentioned in the article (like a true voting system) are things I've really longed for. I might have to reconsider and try out GitLab.
A suggestion I have is to consider open-sourcing and re-branding EE under a GPL-like license that also requires projects hosted with it to be open source, while specifying that contributions to this version can be re-licensed by GitLab for use by paying customers (or re-licensed to MIT and released in CE). This way open source people get it all, and if you want to use it for closed source you pay GitLab.
This also has the benefit of letting open source developers work under the GPL, and the change to MIT could even be decided before merging (sending to either CE under MIT or EE under GPL+GitLab proprietary, according to the developer's decision).
There's a lot to fix in the open source world, but there are also so many possible ways forward. Best of luck to the GitLab team; I hope we see more amazing stuff from you. (Btw, GitLab CI got a lot better recently, I hope you keep improving it. :D)
If you want the talent you need, especially in the Bay Area, you have to pay more than what the average developer makes in Amsterdam. I want to like GitLab, but I just can't get that bad taste out of my mouth.
Github OTOH has an extremely usable mobile UI.
What I really don't get is the argument that "we won't liberate feature XYZ from EE because it's only useful for companies with 100+ developers". I think it's quite impressive that you can know what every user of your free software needs, and that you'll protect them from code only suited for enterprise.
I'll still use GitLab (the fact there's a free software version is great), and I'll be the first to fork (or back someone else's fork of) CE as soon as you get acquired and your free software is no longer maintained by you (see: Oracle with Solaris, and every other acqui-hire ever).
I've implemented two self-hosted GitLab instances at work; for one of them, on our private network, I'm still fighting with IT to allow us to use LDAP. GitLab EE is still beyond our 'pockets', as management aren't too keen to pay for it, at least yet, but I hope that we'll get there.
Our self-hosted instance is also a bit slow (not as slow as GitLab.com), and if it were written in a language that I'm familiar with, perhaps I and some of my team could contribute to making it faster. Pity I don't have enough time left in the day to learn Ruby. I've read up a bit on the work going on around Unicorn and workers, but maybe some of these things could be rewritten in other, more performant languages?
For personal projects I still use Bitbucket + JIRA. I got to the point where I decided to stop looking for freebies and pay. JIRA has been awesome, totally worth the price.
1. More than one level of subdivision for groups/projects (see below).
2. Groups of users (call it department/team): because of 1 we have a lot of groups (all the small libs are in separate projects, so each project is itself a group, and of course we have several projects), so every time somebody joins the company we have to add them to every single project for them to be able to read the code. We also have subcontractors, and we would like a nice way to keep them separate from the others.
There seems to be a lot of hating on GitHub here, but I personally love GitHub (and we use GitLab at my current employer).
I think GitLab is doing a great thing, and I appreciate that their community edition is free and open source, but GitHub has been able to provide an invaluable service. They have a great community that facilitates open source projects and a vastly better UI than GitLab (though that isn't saying much with how awful GitLab's UI is).
I'm eager to see how GitHub evolves in the future with GitLab as a competitor, as GitLab has a lot of nice features (built-in CI, etc).
How does GitLab compare to Phabricator?
These are pretty essential.
By the way, we're using self-hosted Gitlab at work and we love it. This isn't a knock against the actual product. In fact, I think Gitlab has improved tremendously in the last 18 months. I just wish they would be a little more up-front about their marketing efforts.
I love gitlab (even made a git tool to easily create repositories from the commandline, gitgitlab) but these small things make a real difference. I'll end up paying for a github organization account just to get this annoyance out of the way.
I think the spammer is trying to make a point! For starters, there seems to be no rate limit applied.
I use a command line for everything else in life; but with Git I'm hopeless.
But if I press "sign in", I am able to sign up, with no notice about it being just a limited (45-day) trial. So far I'm assuming that this is a perpetually free account, though I'm not completely sure yet...
Who gets to decide that features are enterprise-only? How are these enterprise-only features: "Hosting static pages straight from GitLab", "git-annex", "git hooks", etc.?
Get a crippled version that doesn't fit reasonable expectations, or pay for the Enterprise Edition, which comes with a big bag of features I have no use for, just to get the missing features that I can have on GitHub for free? (And I'm not a big fan of GitHub.)
As such, GitLab Community is not very useful to me and does not seem to have a future, because its chosen business model goes against its usefulness to people.
Is this a joke? I mean, for people looking for free private Git hosting, there is Bitbucket. This statement is like saying "free, but not really, really." The fact is, if I want hosted Git from GitLab, I cannot reliably get it without paying at least $390 upfront for their EE plan. Too much smokescreen, too little actually on offer.
EDIT: After reading the comments in this discussion, many of which are addressed in the NYT article, I'd say the NYT article is almost certainly worth reading.
In searching for Tyche, the WISE missions ruled out the possibility of anything larger than Saturn (95x the mass of Earth) out to about 10000 AU and anything larger than Jupiter (317x the mass of Earth) out to about 26000 AU. WISE was able to detect objects the size of Neptune (17x the mass of Earth) out to about 700 AU, so it should be possible to find the object proposed by the Caltech astronomers here (10x the mass of Earth at around 600 AU). I don't know if WISE's current condition would allow it to perform such a search, as it's completely out of coolant.
This theory definitely looks more promising. Finding eccentric Kuiper belt objects, and aligning them with a missing object seems to be a good bet. Giving the object an orbit should make the search easier, and we will probably have a conclusion one way or another within a few years.
However, to be completely pedantic: would this actually be a planet? Or still a dwarf planet, despite its massive size? Keep in mind that the definition of planethood is not only that it's large enough to be rounded by its own gravity, but that it has also "cleared its orbit". I get the impression that this would cut through broad swathes of the still-cluttered Kuiper belt, and thus would only qualify as a "dwarf" despite its massive size.
I checked the original papers for references to whether it had cleared its orbit, and couldn't find any. Correct me if I'm wrong?
This raised a big red flag in my mind. This must produce a literally astronomical multiple comparisons problem. Yes they reported sigma = 3.8, but if they didn't do their multiple comparisons correction right (which I am in no position to determine), they're basically reading tea leaves.
If you're not familiar with multiple comparisons, it's kind of like [this](https://www.goodreads.com/quotes/649893-you-know-the-most-am...) or [this](https://xkcd.com/882/). If you look at enough extra-neptunian bodies, some of them are going to be in an odd looking cluster.
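To make the multiple-comparisons point concrete, here's a tiny simulation. The threshold and object count are arbitrary illustrations, not numbers from the paper: test enough random, uninteresting "objects" against a 3-sigma cut and some will look significant by pure chance.

```python
import random

# Simulate a null hypothesis: every "object" is ordinary, its test statistic
# is just standard normal noise. Count how many clear 3 sigma anyway.
random.seed(42)

n_objects = 10_000
threshold = 3.0   # a "3-sigma" cut
false_hits = sum(1 for _ in range(n_objects) if random.gauss(0, 1) > threshold)

# P(Z > 3) is about 0.00135, so we expect roughly a dozen spurious
# "detections" out of 10,000 tests, from noise alone.
print(false_hits)
```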
Also, X would be the tenth discovered planet, a nod to the fact that Pluto, while not a planet today, was indeed the ninth _discovered_ planet.
He believed this hypothetical planet of Nibiru to be in an elongated, elliptical orbit in the Earth's own Solar System, asserting that Sumerian mythology reflects this.
It reminds me about a book I read called "In Search of Planet Vulcan" (http://www.amazon.com/In-Search-Planet-Vulcan-Clockwork/dp/0...). Before Einstein, astronomers tried to explain the motion of Mercury by suggesting there might be another planet inside Mercury's orbit.
That's 10x the mass of the Earth, right, or about 3x the size of Neptune?
Though part of me wants to say "Pictures or it didn't happen!"
The fact that the material in that region is so spread out and the orbital period of such object is so long matters.
I would love to read some thoughts on that.
Mike Brown, the co-author of the paper reported here, discovered Eris, a KBO like Pluto, in 2005. This discovery prompted the IAU in 2006 to demote Pluto out of the realm of "planet" into a "dwarf planet".
At the time, Alan Stern's New Horizons mission to Pluto had just been launched; it finally arrived last year. Stern was incensed that NH started out as a visitor to the ninth planet and was going to end up as a visitor to one of many KBOs, and not even the largest one (Eris is more massive).
The quotes given at the time (http://www.space.com/2791-pluto-demoted-longer-planet-highly...) are revealing:
"Pluto is dead." -- Mike Brown
"This definition stinks, for technical reasons...It's a farce." -- Alan Stern
For more: http://www.space.com/12709-pluto-dwarf-planet-decision-5-yea...
Stern is visiting Pasadena on a New Horizons victory lap next week. Should be interesting.
It's not a company. It's not a product. You're not being asked to buy it or buy into it, just to discuss the concept if you'd like.
This website is a portfolio piece for a 21-year-old university student hoping to find an internship. In my opinion, it's an impressive demonstration of his design and technical skills. It certainly says a lot more than the average 21-year-old's resume listing what courses they've taken so far.
Why is everyone out for the blood of the desktop/windows paradigm? Why can't the simplified-tablet market and the desktop-power-user markets coexist? Windows and taskbars and start menus are wonderful and my favorite way of interacting with computers, why must it be taken away?
For example, "Panels use screen space more efficiently and are a more elegant way to multitask than normal windows."
Says who? You? It just drives me crazy when I see statements like these. It's not an effective way to get your point across. You want to make your case for something like that? Actually make your case. Present some evidence and your conclusions. Not everything has to be a Jony Ive marketing video.
edit: Last thing I'll say - I just re-watched the video, and caught the last line - "it rethinks desktop computing to help you get work done". My advice to the author - go work for various companies for 5-10 years, and then come back and see if that statement holds up. My ability to get work done would be crippled with this; in fact, my work would come to a grinding halt. "Work" just doesn't work in the kinds of idealistic ways these types of marketing-like videos always seem to show.
And don't get me wrong, I like the author's ambition. If I was in the internship-givin' business, hell, I'd probably consider him. I think this is a good way to get your name out there, even if it attracts criticism (like mine).
Right now, switching between projects means setting all that up every single time.
While feeling the shortcomings of a window system, I never felt like I could productively work with a tiling manager, since I frequently overlap windows as a mechanism for easy access ... and there are lots of popup windows I don't want to cover the main application.
Oh yes, and I purposefully keep my Mac apps out of Full screen mode (and use a keyboard shortcut to maximize when needed, keeping out of full screen).
To me, an exquisitely blind-accessible GUI should function like a fancy keyboard-navigable/editable graph data structure that echoes the hierarchies and relationships represented on sighted displays. The biggest boon of such a GUI would be that the blind-accessible controls would function as an "expert" mode of navigating the GUI: one would never have to touch the mouse or trackpad to get stuff done.
Sighted users would benefit from learning these keyboard controls, and we'd inject some hyper-productivity back into our apps to counter the Fisher-Price-ification that has been creeping into GUIs over the past 10 years.
I have seen plenty of consultants hired, but nearly every startup I've seen or been at/around (~20) prototyped their designs in house. As an example bu.mp hired an industrial design firm to help them make physical prototypes of a POS competitor to NFC technology (which ultimately failed), but had their own engineers and designers actually create and test the working prototypes.
Shortcomings notwithstanding, I think this is an example of excellent work and a creative way to find an internship. Given the opportunity, I'd most certainly offer this guy an internship if I could.
Nice job. Ich wünsche dir viel Glück. (I wish you lots of luck.)
"We now use smartphones and tablets most of the time, since they are much easier to use."
No. Just no. I don't want to use a touchscreen and closed ecosystem to develop software. That would be a nightmare.
The rest of the design seems to be taken straight from 10/GUI.
I'd like to try something like this in action; but the problem, as always, is going to be bootstrapping. Look how badly Ubuntu manages something as simple as putting the application's menu bar in a non-standard place.
...I worked once with a desktop environment for the PC, GEOS. It had a feature where your application's UI was described in logical terms and this was then mapped to a physical UI when the app loaded. It allowed pluggable look-and-feels to drastically modify the look and behaviour of the application as they saw fit.
If we had something like that, this would be easy. Shame we don't, really.
I found that Neo really resonated with my residential use, but not so much for my work.
From the Author's referenced blog post 'The Desktop is Outdated': "We interact with a lot of different content today, and a large part is outside of files". Not in my work environment where the majority of content is inside of files. But sitting at home - yeah, this is true for me.
It appears to me that the mobile interface cart is trying to drive the productivity desktop horse here. I don't know to what extent I buy it, but I like the way Neo challenges current desktop design.
I'm going to go back and read it again.
A few thoughts there:
1. What happens when my mental schemas change in a year? The word I use to look something up changes, and I suddenly can no longer find it.
2. What happens when I get a little lazy in obediently tagging everything I create? Imagining the 2 or 3 words I'll want to use in the future to look something up (see the first thought) is really tough. Mentally taxing = a barrier to adoption.
3. Folders can get unnecessarily deep, stale, etc... but having the structure available to browse can be an extremely useful trigger in reestablishing the hallways of my desktop-stored "mind palace."
4. Having many ways to discover information I'm looking for > having a few ways. Search, browse, categorize, all have a purpose depending on the way a file or piece of information imprinted on my memory.
With respect to eye tracking, I had a similar idea the other day. Imagine holding a key, then moving your gaze to see an on-screen target following where you're looking at. You could use this as a really quick way to scroll or highlight/copy text without leaving your home row.
I really don't like leaving my home row.
I may be crazy.
For instance, if you can track focus, you can make the monitor seem larger than it really is. Just scale everything that isn't being looked at down a bit. As the gaze shifts towards other objects, shift things slowly around and overlap the enlarged window over other background windows.
You can make focus-dependent shortcuts. Imagine vim with nouns and motions that can refer to and act on the focus point. `ytF`: yank-to-focus, where F is a motion from cursor to focus. Lots of rich possibilities there.
The only major problem I see is that you can't really share work easily (unless you open it up to multiple focus points somehow.) Also, it would be frustrating to have to look where you're typing. I'll often look at something else while typing just before switching tasks. I'm going to start paying attention to my focus more to see if there are other potential pitfalls with the computer knowing about it and changing modes in response.
In general I like the possibilities opened up by having focus. Heck even with a traditional mouse+keyboard, the extra data would help the computer understand us better. It might be more suited to a VR desktop, where the 'multiple-viewers' problem does not exist.
I have to say, I really like tagging-as-filesystem concept (where you can also meta-tag something), and the gaze/touch interaction proposed would be awesome to have in my opinion.
The current interaction with tagging (basing off OS X) is still pretty clunky. It's a separate field, the new vs previous tag selection is aggravating with a keyboard, and there's no easy way to browse or select multiple tags to filter content.
Having to move to and from the mouse a lot is a bit of a pain, and even with the best touchpads on the market, the interaction with them can still be annoying when trying to do things like click on an HN upvote arrow. Being able to start the mouse from the point you're looking at, or even forget the mouse entirely when you start typing into the field you're looking at - those would be fantastic additions.
I'm slightly more dubious about the voice interaction, though there are times where "Hey Siri, set a timer for 10 minutes" is a great way to interact with an otherwise over-complicated device.
This is really important. It's 2016, search is easy, and I shouldn't have to hunt around because I don't know whether you stuck your options dialog under File, Edit, or Tools. I appreciate Ubuntu for implementing this OS-wide with Unity's HUD feature.
"Neo was designed to inspire and provoke discussions about the future of productive computing. It is not going to be a real working operating system interface, it is just a concept. I am not saying that these ideas would definitely work and that this is the future of computing. However, there is large potential in rethinking the core interfaces of desktop computing for modern needs, and somebody has to try."
Okay, buddy. Tell me how you're going to navigate a complex project built with multiple apps with your cool new tag-only scheme.
How would you re-organize the myriad source code files, library sources, images, image source files, readmes, and other things? Hierarchical structures are good for that kind of thing. If you're going to advocate throwing them out then I think it is imperative for you to tell me how you will organize real projects rather than just a handwave about search and adding tags to tags. How would I reorganize the directory full of 193 Illustrator art files and a ton of subdirectories, many with multiple layers of their own subdirectories, that make up the graphic novel I finished last year? It's got a total of ~2.7k files in it but all of that complexity is hidden behind a bunch of subdirectories so I can quickly find what I need at any moment.
(And also: holy crap those sample images are so much WHITE, using this will be like staring into a spotlight. And so impersonal - the user-set desktop picture is a wonderful thing that makes the computer feel like it's theirs and I kinda feel like this proposal completely drops affordances like that in favor of a blown-up iPad UI.)
Taking a look at the gestures...
1) Scroll through panels - Alt+Tab is unquestionably faster (< 1 sec)
2) Open App Control - Win key? (< 1 sec)
3) Open Apps menu - keyboard shortcuts (< 1 sec)
4) Open Finder - Win+E (< 1 sec)
5) Close Panel - Alt+F4 (< 1 sec)
6,7,8) Resize - Win + Up/Down/Left/Right (< 1 sec)
Now, sure I see some people claiming that "nobody" knows these shortcuts or that they are "not intuitive" (itself a loaded term), etc.
Proposal 1 (Teach people these shortcuts)
Proposal 2 (Invent completely new gestures - teach people these new gestures, which will then slow them down because a keyboard is just crazy fast compared to touch.)
Now, I'm going off of the whole Desktop phrasing. Maybe on mobile, all of these might make more sense.
Think about a music program you want to write. You want to find all the artists the music files on this computer have, and, when the user clicks on an artist, find all the music that belongs to that artist.
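The artist use case above is exactly what a tag index handles more naturally than folders. Here's a minimal sketch; the file names, the `tag_file` helper, and the `artist:`/`genre:` tag convention are all made up for illustration.

```python
from collections import defaultdict

# A tag index: each tag maps to the set of files carrying it, so one file
# can live under many tags at once (impossible with a single folder path).
tag_index = defaultdict(set)

def tag_file(path, *tags):
    for tag in tags:
        tag_index[tag].add(path)

tag_file("song1.mp3", "artist:Miles Davis", "genre:jazz")
tag_file("song2.mp3", "artist:Miles Davis", "genre:jazz")
tag_file("song3.mp3", "artist:Aphex Twin", "genre:electronic")

# "Find all artists the music files on this computer have":
artists = sorted(t.split(":", 1)[1] for t in tag_index if t.startswith("artist:"))
print(artists)                                    # ['Aphex Twin', 'Miles Davis']

# "When the user clicks on an artist, find all their music":
print(sorted(tag_index["artist:Miles Davis"]))    # ['song1.mp3', 'song2.mp3']
```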
he will see that the organization in panels rather than windows was called "con10uum" (min 3:30). I wonder if some Microsoft designer/VP/exec saw this and got the idea to make "Continuum" one of the most important features of Windows 10.
Besides that, great work, congratulations @ziburski. Like many others I miss shortcuts in a productivity environment, but this really could work.
One general insight about GUIs is that easily undone mistakes aren't so bad. That's the Amazon one-click purchase insight. The innovation is not that you can buy with one click. It's that you can cancel for the next 10 minutes or so. Most shopping systems had a "we have your money now, muahahaha!" attitude on cancellation until Amazon came along.
http://hci.stanford.edu/research/GUIDe/
http://www.cs.tufts.edu/~jacob/papers/barfield.pdf
Windows + Mouse with hotkeys is still the most comfortable set up for most people including me. I don't need an optimized experience.
I've tried Metro, I've tried i3, I've tried bspwm, I use Vim with split screen, and I still prefer the concept of draggable windows at the end of the day. It feels more free to be able to drag windows and take ownership of the layout than having a computer dictate what my layout should be.
At the end of the day, I feel like I'm more and more preferring simple UI's rather than these multitouch optimized experiences
But he doesn't explain HOW he selected the text. Since there's no mouse cursor, I have no idea how he just did that. A glance doesn't seem to be sufficient since it requires movement and intent.
My 2¢:
- 6 fingers on the trackpad means two hands on the trackpad, which means moving one of them out of its working position. Just like you are avoiding moving the mouse around with the look-and-tap thing, you want to avoid users getting their hands off the keyboard, onto the trackpad, and back to the keyboard;
- the context menu is really nice.
It is a very impressive piece of work for someone who is looking to show off his skills while still in college. I made projects myself before graduating, but they were always amateur-ish and only for myself; I never got around to actually sharing them with the world. He actually did something and put it out there for others to see and judge.
Kudos to the guy, especially if the whole design process which he claims to have gone through (research, target groups, user flows) is as thorough as it sounds.
I completely disagree with that. If you gave two people (one on a PC, the other on a tablet or phone) some task, e.g. move a file from here to there, assuming they were both equally conversant in their chosen system, the PC user would be able to do it in a fraction of the time of the tablet or phone user.
The first is the consideration of running this on current hardware. Obviously Neo is meant to be run on hardware designed for these interactions (as shown with the voice key on the keyboard and the gaze tracking camera) but I'd like more detail on how this could run on systems that do not have this hardware.
The second is in professional applications. I did not see any screenshots that deal with Photoshop or Final Cut Pro or After Effects or Solidworks or really any of the crucially important applications for the desktop. These are exactly what keep the desktop around, and what keep people from accepting Metro.
It'd be great if we could solve these two issues, or at least discuss them. I think these ideas are really great and the presentation was amazing. I'm seriously impressed, but it's a little held back from widespread adoption until we can figure a few things out.
P.S. Also I'd like to be able to split my terminal windows horizontally please :)
AKA the "goatse" gesture.
What does this even mean? ALL my data is in files and folders. Is there a filesystem that doesn't use the concept of files and directories? If not, then isn't it best to model the system closely in the UI?
I also read the "Window management is outdated" and it completely failed to detail how windows are bad. It seems to me that windows are the most flexible UI paradigm, allowing you to decide how exactly you want to use your screen space. The challenge is on the devs to make apps with a reactive UI that changes according to the size of the window.
- Swiping to switch between apps (with a transition animation) is a no-go; cmd-tab is much better as it is right now.
- If two-finger swipe is taken for switching apps, you need something to go back and forward in your browser. Three-finger swipe?
Not sure about tags instead of folders. Folders are simpler and easier to understand, so this needs a lot of detailed review and hands-on investigation.
And here I am, with my desktop computer, comfy keyboard & screens, fully hackable. I've had it for almost 12 years now, and I just replaced parts every now and then to keep it somewhat current. That will never go away. Then I got an old laptop for working away from home, but I hardly ever use it.
So when I hear that touch screens will become ubiquitous and their necessarily minimalistic UX will drive the consumer computer industry, I have my doubts.
Never gonna happen.
Something being useful does not mean it will necessarily eat everyone's lunch. Nor that it has to.
I would much prefer all of the things I do to be organised by the work I'm doing. Here is an example.
If I switch to the "personal project" I get a completely clean workspace (or whatever I left it as) with email filtered to be about only things related to my personal project. I want only the apps that I use for this project (browser, pycharm, photoshop, terminal) all on different screens all of these apps only showing the work and paths I have associated with a specific project.
Basically build GTD into all my apps and the desktop and allow me to filter said desktop by tags, projects, people etc.
And I think I'll take this opportunity to turn on no procrast again...
There are some very good ideas in here, notably the pop-up circular 'swipe' menu of contextual actions (near the bottom of the page). I built one of these into a trading system once and it was well received. Gaze tracking combined with "Just Type" strikes me as being potentially very powerful too, if combined with some sort of highlight thing so you know where you're going to be typing.
There are some amusingly weird things too, like the polydactyl-only requirement to "click and drag inwards on a panel with 6 fingers."
While browsing this I found it inspiring and beautiful. Way better than anything some companies that call themselves "design" shops put out for BIG $$$.
Screw all the negative energy here. Go man, go. You are the real deal among all the fake critics-designers.
- Tiling window manager - that I have (StumpWM), and it's awesome! (pun intended)
- Convenient tagging of everything - now that's been discussed for many years already, and yet for some reason nobody actually implemented it properly.
- Eye tracking - it's a feature I wanted to have since I started using multiple monitors and noticed the unnecessary input action I have to make to switch focus so that it follows what I'm looking at.
It also has one of my favourite UI patterns - wheel menu.
A great demo, IMO, and I wish someone would implement it. I'd happily try it out, and if good enough, I'd become a paying user.
I wonder how natural touch on a touchpad is for a productivity flow in a desktop context. I personally never managed to integrate it into my workflow (neither with the Magic Mouse nor with trackpads). I also would be interested in understanding the impact of a mixed input experience; I'm not sure our brain can switch context that fast. On the other hand, I see the synergy in having eye sight + voice control.
I get that this is meant to be a thought provoking concept piece and I respect that. My issue here is with the font color choices on some of the paragraphs. It is set to have an alpha of 0.4! Why would you do that? Why make things less legible?
I'm no designer, so I've made some poor contrast choices myself and can forgive those, but why would you make the text more see-through? Is this a design pattern?
Surely if technology is good enough to track the eye, we don't need a touchpad to track hand gestures?
I really like this concept, but when I heard "three finger scroll" I died a little inside.
Can't these Apple hipsters just be happy with the iDevices and leave my WM alone?
But please don't fork/refactor Gnome again to become this. Not yet.
Lennart's selecting from among the best widespread ideas, adding in several that are currently fringe, and coming up with a few twists of his own. The result is several things I'd very much like to have right now.
Any company looking hard at the desktop would be idiotic not to make him an offer. Lennart would be best advised to pick very carefully from among those he's tendered, and to make clear his own expectations.
The losers will be chasing his taillights for the next 20 years. Possibly 40.
The vision here is strong, but practical. It dispenses with numerous tired elements of existing design, but replaces them with what appear to be far more workable models. I see some weaknesses and holes in the presentation, but at this stage these are details, not, as is so often the case, baked-in architectural flaws.
The synthesis of tiling, desktop, and touch interfaces in particular is quite promising.
My biggest concern is that Lennart would accept an offer where his ideas would be absorbed into the black hole of an ossified company.
The obvious contenders for him are Apple, Google, and Microsoft. Ubuntu and Mozilla might be in the offing, and dark horses might emerge from Amazon or one of the existing hardware vendors -- Asus, Samsung, or LG, perhaps.
I think Apple, Microsoft, and Amazon would all be mistakes. The first two are far too wedded to their legacy platforms. Amazon is simply a horrible place to work, and the first rule is to not work for assholes.
Google, Ubuntu, or Mozilla offer the opportunity to develop this project and maintain an open-source offering. They'd probably be at the top of my list. I've suggested Google talk to Lennart and make him an offer.
One of the major hardware vendors might be of interest. I haven't kept track of where KDE/Qt are headed, but that's another option.
What I'm most thinking is, "damn, this would be a great time for a credible challenger for the desktop to appear". I'm not sure there is one.
But the world is Lennart's oyster.
I've commented further on the interface, and on what I see as weaknesses based on the presentation:
Conceptually, I'm really liking the approach. I could see it being a viable refreshing of desktop paradigms. There is an interesting mix of OS X, Windows, Linux WMs, and some other goodies from other apps here.
Visually, the biggest drawback is I could not tell while watching the video which panel has focus. I am assuming the idea is this would be handled by tracking eye focus. Perhaps I could get used to that, but it'd have to be instantaneous switching. As I'm typing this in a half-width browser window that takes up all vertical space, I have the Desktop Neo site in another half-width browser window taking up full vertical space by its side. I'm bouncing my eyes back and forth between the site and this textbox I'm typing in. I'm currently staring at the Neo site while I'm typing, without any looking back. Such a desktop paradigm would have to remain very intelligent about recognizing that I'm currently typing in a panel while looking at, and perhaps scrolling through another panel, without wanting my current action to lose focus or be interrupted in any way. I work this way all the time.
edit: this is nothing more than a design prototype.
I don't adblock for privacy, security, or speed. Those are just nice-side effects. I adblock because I do not want to be manipulated into buying things I do not need.
I wonder what would happen if, as a society, we said, "enough, no more ads". Would it really be the capitalist apocalypse that the ad industry is trying to make us believe it would be?
Why on earth would users want this browser?
Thinking selfishly, I would much prefer the status quo, where I can block most ads but the majority of consumers don't. Current ad-blocking tech is fine; I'm afraid this could turn into an arms race.
Our difference from Brave is that we give free ads to everyone; the advertiser only pays if the end user makes a purchase. Similarly, the display site gets nothing if there was no economic exchange. Capitalism is supposed to be a machine for getting you what you want. We want to help that process along. I have an uncompromising attitude that web/world ads should be for things that you really want to see, and then they become content.
That might be a utopian vision today, but I have strong belief in the power of people's self interest to drive positive change.
Edit: chrispm reposted the link here, https://news.ycombinator.com/item?id=10940684
1) Desktop browser is an electron app with ad tracking injected into your app via http://cdn.brave.com/ via https://sonobi.com/welcome/index.php which promises "EFFECTIVELY PLAN AND SOURCE MARKETING OPPORTUNITIES WITH QUALITY AND VIEWABILITY FROM PREMIUM PUBLISHERS"
2) iOS browser is a fork of Firefox iOS - https://github.com/mozilla/firefox-ios
3) Android browser is a fork of https://play.google.com/store/apps/details?id=com.linkbubble...
I don't mind ads in print magazines so much (other than the fact that print magazines are unlikely to write negative stuff about companies that advertise with them), because there's no movement in them. I can easily read one page even though the next page has a full-page ad.
They mention standard sized spaces and faster browsing. I actually wouldn't mind large ads - like something taking up my whole screen - that I can scroll through. Back in the 90s, it probably made sense to have small 468x60 pixel banner ads, but as fast Internet connections are becoming more and more common, I don't really see the point of restricting the size anymore. Large full page ads aren't really a problem in print magazines, and I don't think it would be on the web either, if we just got rid of the animations.
From the Project Xanadu Wikipedia article:"9. Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies ("transclusions") of all or part of the document."
Ted's approach is (in my view) also a deduplication effort, as you're citing the original content, tracing it back to its origin by reference.
How do they plan on doing that? Not like it hasn't been tried before. The problem is you can't collect money on someone's behalf without them opting in, and if it is opt-in only you get the chicken and the egg problem for adoption.
(1) If they block tracking, does it block Google Analytics? Because that would annoy me as a website owner.
(2) The reason I don't pay subscriptions to sites like the Wall Street Journal and the NY Times is that I get my content from aggregators like Hacker News, so I only go to one of those paid sites if I follow an occasional link. Micropayments would fix that if I could pay one company a $5/mo subscription and have payments automatically doled out to a select list of good sites until my $5 was used up (then maybe ask me each time after that, or something).
(3) They talk about avoiding the ad-blocking war, but they are just contributing to it. I guess what they think is that by giving the website owner a way to get paid they avoid some of the war, but many companies like to be in direct control of their money, so they might not like a middleman sitting on the highway charging everyone a tax to pass. And if Brave doesn't charge something for its services then it has no business model, so I'm assuming they are not passing 100% of revenue on to the site owner.
They've received substantial investor money, so apparently they have something lucrative in mind. And it's probably not good for privacy-conscious end users.
One of the things he mentions on one of the sites is AdSense looking at every scroll event and doing tracking work that takes 25 ms on a smartphone (his smartphone, likely a high-end one). That means your scrolling performance is going to be inherently bad, probably below 30 fps once you take into account other work done by the browser or the site. Having a browser that takes out this kind of code, but doesn't break the business model of the website owner, does seem like an interesting idea. It seems like a major part of the mobile web is half-broken for these kinds of reasons.
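To put that 25 ms in perspective, a back-of-the-envelope sketch (the 25 ms figure is from the comment above; the frame budgets are just the standard refresh-rate math):

```python
# Frame budgets at common refresh rates, in milliseconds per frame.
budget_60fps = 1000 / 60   # ~16.7 ms
budget_30fps = 1000 / 30   # ~33.3 ms

tracking_cost = 25  # ms spent in the scroll handler, per the comment

# The tracking alone blows the 60 fps budget before the browser
# does any layout or paint work at all.
print(tracking_cost > budget_60fps)            # True
print(round(budget_30fps - tracking_cost, 1))  # 8.3 ms left at 30 fps
```

So even at 30 fps, layout, paint, and the site's own handlers all have to fit in the ~8 ms that remains.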
Maybe enterprises or businesses will like it, so their employees visiting whitelisted sites aren't exposed when those sites mistakenly serve malicious code in their ads (e.g. Flash exploits).
"Firefox for iOS"
They forgot to remove the branding from their "new" browser.
They want to block the ads that the person running a web site put on their own site and replace them with their own (Brave Ads Infused TM Ltd. Inc. - let's make some money while pretending we are freeing the world).
For example, CBS/ABC/NBC seem to detect µBlock and then stop serving content.
How should the user agent decide when to alert the user?
Now, time will tell how things play out, but I believe I can count on Brendan to make the right choices when it comes to features and compromises.
Wow, new browser technology that is open source - this is good news. I'm hoping the development focus stays flexible; remember Flock?
I dislike ads, but there are already solutions for blocking them. Although I do like the premise of this, I'm not eager to switch browsers just to start supporting advertisers.
On the other hand, if this gets traction (unlikely, admittedly) this may finally force the issue to the courts and get content fiddling declared copyright/TOS violation. Which I'm not sure you all want.
"Then we put clean ads back". This is open source, right? It's on Github. Can someone fork this and remove all the ads? Thank you.
This however does not tackle the mindset shift that needs to occur for the masses to start protecting the private information they voluntarily give up on services they are signed in on the social net.
We are currently working on a project that will use this information to the marketer's advantage in a way that will make people sick once they realize the extent of the profiling going on, with the ultimate goal of reversing the trend before it's too late. Make people raise their guards, sell some tech on the way.
That's not a realistic claim. Nothing is stopping publishers and advertisers from sharing back end data.
And can anyone find this 'roadmap' that Eich talks about in the post?
To be fair, Firefox for iOS is open source. Take it, remix it, improve it. It is all good. Mozilla Public License.
How do you want to finance development in the long run?
It is a nice solution and I'd hate to see it go because of financial problems.
'Brave browser promises faster Web by banishing intrusive ads' | Jan 20, 2016 http://www.cnet.com/news/ex-mozilla-ceo-try-braves-new-brows...
> Eich and his team built Brave out of Chromium, which is the foundation for Google's Chrome browser, which leaves most of the actual development and security support to Google. Why not use Firefox, into which Eich poured so much effort? Because Chrome is more widely used and therefore better tested by developers who want to make sure their websites work properly, he said. "Chromium is the safe bet for us," he said.
* The desktop browser is a cross-platform desktop application created with a fork of GitHub's Electron framework, which is itself based on Node.js and Chromium. https://github.com/brave/electron https://github.com/brave/browser-laptop
* The iOS browser is a fork of Firefox for iOS, which is a Swift app developed from scratch by Mozilla. https://github.com/brave/browser-ios
* The Android browser is Link Bubble, which is a wrapper around the default Android browser https://github.com/brave/browser-android Previous HN discussion here: https://news.ycombinator.com/item?id=7453897 Australian developer Chris Lacy announced its sale in Aug 2015: http://theblerg.net/post/2015/08/05/ive-sold-link-bubble-tap...
* The ad blocking technology is courtesy of an Adblock Plus filter-parser module that uses a bloom filter and the Rabin-Karp algorithm for speed. https://github.com/bbondy/abp-filter-parser-cpp
* The database is MongoDB. https://github.com/brave/vault
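The ad-blocking speed trick mentioned above (a bloom filter as a cheap pre-check before the expensive full filter match) can be sketched roughly like this. The hashing scheme and the rules are illustrative, not Brave's actual implementation:

```python
import hashlib

FRAG = 8  # fixed substring length indexed in the bloom filter

class BloomFilter:
    """Tiny bloom filter: a cheap 'definitely not present' pre-check."""
    def __init__(self, size_bits=1 << 16, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Illustrative filter rules, not a real ABP list (all at least FRAG chars).
rules = ["doubleclick", "adserver"]
bloom = BloomFilter()
for rule in rules:
    bloom.add(rule[:FRAG])  # index each rule by its first FRAG characters

def is_blocked(url):
    # Fast path: if no FRAG-length window of the URL might be a rule
    # prefix, no rule can be a substring of the URL, so skip the scan.
    windows = (url[i:i + FRAG] for i in range(len(url) - FRAG + 1))
    if not any(bloom.might_contain(w) for w in windows):
        return False
    # Slow path: full substring match against the real rules.
    return any(rule in url for rule in rules)
```

The bloom filter never produces false negatives, so the slow path only runs for URLs that genuinely might match; a real matcher would also handle rules shorter than FRAG and the richer ABP syntax.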
Past news coverage:
Mystery startup from ex-Mozilla CEO aims to go where tech titans won't | Nov 17, 2015 http://www.cnet.com/news/mystery-startup-from-ex-mozilla-ceo...
Use Link Bubble to open links in the background on Android | Aug 26, 2015 http://www.cnet.com/how-to/use-link-bubble-to-open-links-in-...
People would actually stop to read the ads because they were interesting and relevant.
Then Google caved to images and animation and 100+ objects on a page, each with their own tracking scripts, slowing browsers to a crawl.
But I can't do that on my phone without jailbreaking it. Stupid phone.
This isn't that far removed from coming into a bakery and saying "The Cupcakes are no longer $2, they're $1.50 'coz that's what we think people want to pay."
I realize the idea is that this is "better" for the content providers than ad blocking, but both are, IMHO, stupid. If a site you visit has ads you don't like, complain to the people who run the site and stop going to it. Ad-blocking software has never been a fix, merely a tool in an ever-escalating war over ads where users and content creators both lose.
Indeed it is a very loathsome business model.
People have taken exception to it when AT&T and Comcast inject ads into your browsing experience, and when Adblock Plus removes ads and then reinjects them.
Why is this not hijacking the web, extorting publishers into buying into yet another ad network, and then trying to leverage this into a future payment network?
Could such thing be secure?
"The new Brave browser blocks all the greed and ugliness on the Web that slows you down and invades your privacy. Then we put clean ads back."
A browser built with Electron that exposes Node.js and otherwise keeps away from the HTML5 kitchen sink, in order to push innovation away from the spec committees and back out to the community. Vital technology like TCP, UDP, DNS, and the filesystem is being locked up behind a facade of poorly implemented APIs.
A browser with a small, efficient core, optimized for rendering, and with a brilliant app install system, and brilliant native cross-platform integration. The time is ripe.
1. My Time Machine backup (primary backup)
2. BackBlaze (secondary, offsite backup)
3. Amazon Glacier (tertiary, Amazon Ireland region)
I only store stuff that I can't afford to lose on Glacier: photos, family videos and some important documents. Glacier isn't my backup, it's the backup of my backup of my backup: it's my end-of-the-world-scenario backup. When my physical hard drive fails AND my Backblaze account is compromised for some reason, only then will I need to retrieve files from Glacier. I chose the Ireland region so my most important files aren't even on the same physical continent.
When things get so dire that I need to retrieve stuff from Glacier, I'd be happy to pony up 150 dollars. For the rest of it, the 90 cents a month fee is just a cheap insurance.
Google Nearline is a much better option IMO. Seconds of retrieval time and still the same low price, and much easier to calculate your costs when looking into large downloads.
First of all, I just woke up (it's morning here in Helsinki) and found a nice email from Amazon letting me know that they had refunded the retrieval cost to my account. They also acknowledged the need to clarify the charges on their product pages.
This obviously makes me happy, but I would caution against taking this as a signal that Amazon will bail you out in case you mess up like I did. It continues to be up to us to fully understand the products and associated liabilities we sign up for.
I didn't request a refund because I frankly didn't think I had a case. The only angle I considered pursuing was the boto bug. Even though it didn't increase my bill, it stopped me from getting my files quickly. And getting them quickly was what I was paying the huge premium for.
That said, here are some comments on specific issues raised in this thread:
- Using Arq or S3's lifecycle policies would have made a huge difference in my retrieval experience. Unfortunately for me, those options didn't exist when I first uploaded the archives, and switching to them would have involved the same sort of retrieval process I described in the post.
- During my investigation and even my visits to the AWS console, I saw plenty of tools and options for limiting retrieval rates and costs. The problem was that since my mental model had the maximum cost at less than a dollar, I didn't pay attention. I imagined that the tools were there for people with terabytes or petabytes of archives, not for me with just 60GB.
- I continue to believe that "starting at $0.011 per gigabyte" is not an honest way of describing the data retrieval costs of Glacier, especially when the actual cost is detailed, of all places, in an answer to an FAQ question. I hammer on this point because I don't think other AWS products have this problem.
- I obviously don't think it's against the law here in Finland to migrate content off your legally bought CDs and then throw the CDs out. Selling the originals, or even giving them away to a friend, might have been a different story. But as pointed out in the thread, your mileage will vary.
- I am a very happy AWS customer, and my business will continue to spend tens of thousands a year on AWS services. That goes to something boulos said in the thread: "I think the reality is that most cloud customers are approximately consumers". You'd hope my due diligence is better on the business side of things, as a 185X mistake there would easily bankrupt the whole company. But the consumer me and the business owner me are, in the end, the same person.
It's even less suited to disaster recovery (unless you have insurance).
Think about it. For a primary backup, you need speed and ease of retrieval. Local media is best suited to that, unless you have an internet pipe big enough for your dataset (at a very minimum, 100 Mbit/s per terabyte).
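The "pipe big enough" point is just arithmetic; a quick sketch (the 100 Mbit/s-per-terabyte figure is from this comment, the gigabit line is an extra example):

```python
def restore_hours(dataset_tb, link_mbps):
    """Hours to pull dataset_tb terabytes over a link_mbps megabit/s pipe."""
    bits = dataset_tb * 1e12 * 8        # decimal terabytes -> bits
    seconds = bits / (link_mbps * 1e6)  # megabits/s -> bits/s
    return seconds / 3600

print(round(restore_hours(1, 100), 1))   # 22.2 hours for 1 TB at 100 Mbit/s
print(round(restore_hours(1, 1000), 1))  # 2.2 hours on a gigabit link
```

So 100 Mbit/s per terabyte means roughly a day per terabyte of restore time, assuming you can saturate the link.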
A 4-8 hour recovery time is pretty poor for a small company, so you'll need something quicker for primary backup.
Then we get into the realm of disaster recovery. However, getting your data out is neither fast nor cheap: at ~$2000 per terabyte just for retrieval, plus the inherent lack of speed, it's really not compelling.
Previous $work had two tape robots; one was 2.5 PB, the other 7(ish). They cost about $200-400k each. Yes, they were reasonably slow at random access, but once you got the tapes you wanted (about 15 minutes for all 24 drives) you could stream data in or out at 2400 megabytes a second.
Yes, there is the cost of power and cooling, but it's fairly modest unless you are running at full tilt.
We had a reciprocal arrangement where we hosted another company's robot in exchange for them hosting ours. We then had DWDM fibre giving a 40-gig link between the two server rooms.
The idea would be that the data would either never be restored or you could compel someone else to foot the bill or using cost sharing as a negotiation lever. (Oh, you want all of our email for the last 10 years? Sure, you pick up the $X retrieval and processing costs)
Few if any individuals have any business using the service. Nerds should use standard object storage or something like rsync.net. Normal people should use Backblaze/etc and be done with it.
Yes, the docs are imperfect (and were likely worse back in the day). And it was compounded by the bug, apparently. But it's what everyone on HN has learned in one way or another... RTFM.
Was it mentioned in the article that the retrieval pricing is spread over four hours, and that you can request partial chunks of a file? Heck, you can always retrieve all your data from Glacier for free if you're willing to wait long enough.
And if it's a LOT of data, you can even pay and they'll ship it on a hardware storage device (Amazon Snowball).
Anyone can screw up; I'm sure we all have, and goodness knows I have. But at the very least, pay attention to the pricing section, especially if it links to an FAQ.
You pay a lower per-kilowatt-hour rate, but your demand rate for the entire month is based on the highest 15-minute average in the entire month, then applied to the entire month.
You can easily double or triple your electric bill with only 15 minutes of full-power usage.
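A toy illustration of how a demand tariff plays out; the rates here are made up for illustration, not any real utility's:

```python
# Hypothetical tariff; rates are illustrative only.
ENERGY_RATE = 0.06   # $ per kWh of energy consumed
DEMAND_RATE = 12.00  # $ per kW of peak 15-minute average demand

def monthly_bill(kwh_used, peak_15min_kw):
    # Energy charge plus a demand charge set by the single highest
    # 15-minute average for the whole month.
    return kwh_used * ENERGY_RATE + peak_15min_kw * DEMAND_RATE

# A steady 2 kW load all month (~720 h) is 1440 kWh:
print(round(monthly_bill(1440, 2), 2))   # 110.4
# Same total energy, but one 15-minute spike at 20 kW roughly triples it:
print(round(monthly_bill(1440, 20), 2))  # 326.4
```

The energy term is identical in both cases; the whole difference comes from that one 15-minute peak.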
I once got a demand bill from the power company that indicated a load that was 3 times the capacity of my circuit (1800 amps on a 600 amp service). It took me several days to get through to a representative that understood why that was not possible.
Has anyone tried this or know of a gotcha that would exclude this?
And I realize that for the OP's situation, it wouldn't have mattered since he thought he was going to get charged a fraction of this.
These days the infrequent access storage method is probably better for most people. It is about 50% more than Glacier (but still 40% of normal S3 cost) but is a lot closer in pricing structure to standard S3.
Only use Glacier if you spend a lot of time working out your numbers and are really sure your use case won't change.
- 5 cents per 1000 requests adds up with a lot of little files.
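To see how the per-request fee dominates with lots of little files, a quick sketch (the rate is from this comment; the file counts are examples):

```python
REQUEST_RATE = 0.05 / 1000  # $0.05 per 1,000 requests

def request_cost(num_files):
    """Per-request fees alone, ignoring storage and bandwidth."""
    return num_files * REQUEST_RATE

# The same archive as 150 big bundles vs. a million little files:
print(round(request_cost(150), 4))        # 0.0075 dollars: negligible
print(round(request_cost(1_000_000), 2))  # 50.0 dollars in request fees alone
```

Bundling small files into large archives before upload makes the request fees vanish.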
That's something that generally keeps me from using AWS and many other cloud services in many cases: the inability to enforce cost limits. For private/side project use I can live with losing performance/uptime due to a cost breaker kicking in. I can't live with accidentally generating massive bills without knowingly raising a limit.
My only experience of using boto was not good. Between point versions they would move the API all over the place, and this being Amazon, some requests take ages to complete.
After that I worked with Google APIs, which were better, but still not what I'd describe as fantastic (hopefully things have improved over the last 2 years).
Does s/he substantiate this claim in any way? AFAIK glacier's precise functioning is a trade secret and has never been publicly confirmed.
As noted by others here, if you treat glacier as a restore-of-absolute-last-resort, you'll have a happier time of it.
Perhaps I'm being churlish, but I railed at a few things in this article:
If you're concerned about music quality / longevity / (future) portability - why convert your audio collection to AAC?
Assuming ~650MB per CD, the 150 CDs quoted, and ~50% reduction using FLAC, I get just shy of 50GB total storage requirements -- compared to the 63GB 'Apple Lossless' quoted. (Again, why the appeal of proprietary formats for long-term storage and future re-encoding?)
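The arithmetic, as a sketch (the ~50% FLAC ratio is the commenter's own estimate):

```python
CDS = 150
MB_PER_CD = 650    # ~650 MB of raw CD audio per disc
FLAC_RATIO = 0.5   # rough lossless-compression estimate from the comment

raw_gb = CDS * MB_PER_CD / 1024
flac_gb = raw_gb * FLAC_RATIO
print(round(raw_gb, 1))   # 95.2 GB raw
print(round(flac_gb, 1))  # 47.6 GB in FLAC: "just shy of 50GB"
```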
I know 2012 was an awfully long time ago, but were external magnetic disks really that onerous back then, in terms of price and managing redundant copies? How was the OP's other critical data being stored (presumably not on Glacier)? E.g. my photo collection has been larger than 60GB since way before 2012.
Why not just keep the box of CD's in the garage / under the bed / in the attic? SPOF, understood. But world+dog is ditching their physical CD's, so replacements are now easy and inexpensive to re-acquire.
If you can't tell the difference between high-quality audio and originals now - why would you think your hearing is going to improve over the next decade such that you can discern a difference?
And if you're going to buy a service, why forego exploring and understanding the costs of using same?
I'm really doubting the need for a maintenance regimen on a drive which is almost entirely unused. Could have spent $50 on a magnetic-disk-drive and saved yourself hours worth of trouble.
I currently have 100GB of photos on Glacier. I am going to be finding another hosting provider now.
I ended up using some cheap VPSes, two of them located in two different countries. And it's still cheaper than, say, Dropbox.
I'm surprised that this aspect has not been mentioned here in the comments yet:
> I was initiating the same 150 retrievals, over and over again, in the same order.
This was the actual problem that resulted in the large cost.
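The costly pattern (re-initiating the same retrieval jobs on every run) is easy to guard against by remembering which archives have already been requested. A minimal sketch; `initiate_retrieval` here is a stand-in for whatever API call actually starts a Glacier job, and the state file name is made up:

```python
import json
import os

STATE_FILE = "initiated_jobs.json"  # hypothetical local state file

def load_initiated():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def save_initiated(initiated):
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(initiated), f)

def retrieve_all(archive_ids, initiate_retrieval):
    """Initiate each retrieval at most once, even across repeated runs."""
    initiated = load_initiated()
    for archive_id in archive_ids:
        if archive_id in initiated:
            continue  # already requested once: don't pay for it again
        initiate_retrieval(archive_id)
        initiated.add(archive_id)
        save_initiated(initiated)  # persist immediately, in case we crash
```

With state persisted after every request, re-running the script after a crash or a client bug picks up where it left off instead of paying for the same 150 retrievals again.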
At my old job we would get a lot of complaints about overage charges based on usage of our paid API. The pricing wasn't as complicated as a lot of AWS services - just x req/month and $0.0x per req after that - but every billing cycle someone would complain that we overcharged them. We would then look through our logs to confirm they had indeed made the requests and provide the client with those logs.
Here in New Zealand, we have many native species of birds, insects, frogs, lizards and the like that thrived when our islands were cut off from the rest of the planet, but that have become extinct, or are in imminent danger of being so due to introduced predators such as rats, stoats, hedgehogs, ferrets, cats etc. etc.
It leads to the bizarre situation that conservation here is largely about killing things.
 http://www.radiolab.org/story/brink/ https://en.wikipedia.org/wiki/Judas_goat
There is also a book out now on the insects.
So there are bigger insects around, and they can fly?
/I was half expecting the article to outline how a box full of 80s technology had been replaced with a Raspberry Pi!
I'll be honest, I had no idea all of this goes into radio. I thought there were just, err, towers?, that broadcast... radio waves, and that was kind of it? I mean, I knew on some level it had to be a tad bit more complex than that, but you never see or hear about it.
As part of making the unit, we had to ensure it complied with all the relevant European directives so that it could be CE marked. We had the Codec tested for both safety and EMC.
Interesting, I thought this wasn't required for internal use; probably the BBC corporate structure means it no longer counts as "internal".
On "Radio Teleswitch":
[Droitwich] transmitter uses a pair of obsolete metre-long valves which are no longer manufactured anywhere in the world.
In October 2011, the BBC admitted that the Droitwich transmitter, including Radio 4's longwave service and Radio Teleswitch, will cease to operate when one of the last two valves breaks, and no effort would be made to manufacture more nor to install a replacement longwave transmitter.
This is like the Domesday Book (1980s interactive multimedia system on Laserdisc): high-tech and sui generis at the time it was built, but eventually uniquely obsolete.
The BBC itself is very much a valve-and-laserdisc organisation in a Google world. We huddle round the warm glow, concerned that eventually some vital element will give out due to lack of money and the whole enterprise will give up.
Periodically someone comes up with the idea of shutting down analog FM radio. This is politically inconceivable as half the country has ancient radios that were tuned to Radio 4 in 1967 and have never played any other station (an exaggeration, but only a small one).
I'm surprised BBC is not only willing to make custom hardware, but custom RTL design as well.
I'm curious what the failure rate of the FPGA will be, I mean they are more susceptible to soft errors than CPUs or ASICs. Maybe BBC will fund making a custom ASIC :-) Well I see two systems, maybe they are redundant that way.
Why not make the project open source? Put it up on github.
edit: Here's the video I had in mind https://www.youtube.com/watch?v=51F7zNWqlgM I guess it's boring without understanding the audio :)
There's more on his channel
"When you do things right, people won't be sure you've done anything at all." - Futurama
I'm surprised, I thought in the UK these kinds of taxpayer-funded projects were required to be open source.
In any case, it's too bad. I was not familiar with the NICAM codec up to now, but I'm sure that software and hardware plans would have been very useful to some developing countries, many of whom apparently also use NICAM in their state-owned broadcast companies.
It could work if there were a subtitle. "35 million people didn't notice a thing: BBC Radio's NICAM Codec replacement project". I dunno. But it is (or should be) very important that the title actually informs you as to what the article is about. "35M people didn't notice a thing" doesn't tell you anything at all.
Edit: Upon further consideration I may have been a little harsh. This is really a blog entry more than an "article" per se, and I suppose that's fine for a blog post title. The problem comes when we link to it from somewhere else, e.g. HN; we need the additional context.
Granted, it might look "beautiful" in the eyes of an experienced RoR developer, but personally I find it just makes code very hard to read. Just my 2 cents.
It does one thing, web applications, and does it really, really well for the type of coder / team it was designed for.
People who belittle it by saying it has too much magic or whatever have never seriously tried to maintain a huge web application all by themselves. Web apps are ridiculously, mind-bogglingly complex; Rails conventions have been baking for well over a decade now.
If you're Twitter, maybe you have the resources to maintain a serious web presence without Rails. Everyone else is just handicapping themselves.
* Using a well-known framework is favourable over new shiny toys in a commercial system
* An ecosystem that has good 'defaults' is essential. A single web framework won't do everything for you. You need stuff around that for testing, deployments, etc.
For mainly the reasons above, Rails is still my primary tool of choice. Yes, it has pain points, but the reasons above far outweigh the new and shiny.
I found Rails during the 0.9 releases. Coming from the horrific world of Java web frameworks like Spring and Tapestry, it was a revelation to see how fun and productive web programming could be again.
When I saw things like 2.times, 1.day.ago, and ActiveRecord, I knew I was done with Java web development. This isn't a knock against Java per se, but I just spent so much time in the trenches having to do busywork and configuration that isn't necessary if authors of those frameworks put programmer productivity and enjoyment first.
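The idioms mentioned above can be sketched in a few lines of plain Ruby. This is a toy illustration of the idea, not ActiveSupport's actual implementation: helpers like `1.day.ago` work by reopening core classes such as `Integer`, while `2.times` is core Ruby.

```ruby
# Toy sketch of how ActiveSupport-style helpers like 1.day.ago can be
# built (NOT ActiveSupport's real code): reopen Integer and add
# duration methods.
class Integer
  SECONDS_PER_DAY = 86_400

  def days
    self * SECONDS_PER_DAY   # duration expressed in seconds
  end
  alias day days

  def ago
    Time.now - self          # treat the integer as seconds in the past
  end
end

# 2.times, by contrast, is plain core Ruby: run the block twice.
greetings = []
2.times { greetings << "hello" }

puts greetings.inspect        # ["hello", "hello"]
puts 1.day                    # 86400
puts 2.days.ago < Time.now    # true
```

The point is that the framework authors bent the language toward readable, intention-revealing call sites instead of making the caller do the bookkeeping.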
Thank you DHH!
After a few strings of drama in the NodeJS land, a few chats with friends who used NodeJS in production, and seeing the pattern of BigCos who use NodeJS only for glue/front-end/gateway app servers, I'm pretty close to settling on Rails.
I admire DHH for his dedication to the framework for so long (since 2004). You don't see that type of dedication in the NodeJS land. Yes, NodeJS is still new, but a few BDFLs/leaders of important NodeJS projects have left the boat already. One of the premier NodeJS vendors, StrongLoop, was always mired in controversy (I happen to know a few people who worked for a company that was acquired by StrongLoop).
Building a long-term side project requires a stable platform because of the time limits (outside office hours + other responsibilities). Dealing with unstable but "sexy" technology is not a good choice.
We have been working with Elixir (the language Phoenix is built on) for over a year. It is an incredibly satisfying language and its toolset is surprisingly advanced for its age. Phoenix has been a very easy mental jump for our developers who were already trained on Rails.
For the new year, I'm thinking about the programming landscape and contemplating what I want to learn next. I've tinkered with Ruby on Rails in the past and think DHH is on to something with the doctrine he has laid out. I've only scratched the surface of Rails, but it still seems pretty neat to me. A deeper dive would be fun.
On the other end of the spectrum, Dart + Angular is appealing in part because I'm a fan of some of the Google developers working on Dart. While Rails feels like it is entering a comfortable middle age (or at least adulthood), Dart feels a little like a car being assembled as it cruises down the interstate. It's new and maybe a bit dangerous.
Both projects are super-appealing to me. Unfortunately time constraints prohibit me from chasing both and I'm interested in opinions especially of Rails people. Are you still excited about it?
At my corporate job, we use Java with Spring, and a hellish mess of node packages/libraries/whatever we're calling them now. It's not good.
So when I do side projects or consulting gigs, I always run to Rails which is what got me into development. It's very refreshing.
Reminds me of the jQuery post the other day. Rails is just a little older than jQuery. Rails with some jQuery sprinkled in is as close to development perfection as I've ever seen.
Also re: "too much magic" - Why is magic a bad thing? Creating layers of abstraction to ease building is what programming is really about, right? Why would we want to go backwards to configure everything under the claim of "I need to know how EVERYTHING works!"?
I don't think this is something to be admired in a language. Java is a bad language to work in because it's not expressive, not because it protects programmers (which it doesn't).
Thankfully, we are seeing this shift the other way with the growing popularity of languages like Rust and Swift.
I've never been more unhappy programming than when I touch Rails. I'll take python or Java any day of the week.
>Convention over Configuration
Discoverability is totally gone. Walking into a brand new RoR app, it's next to impossible to figure out what's going on. Not having a good IDE doesn't help.
>Exalt beautiful code
Write simple easy to read code. Don't be clever.
I strongly disagree with DHH's points 2,3,4, and 6. But I think I'm going to learn the language anyway and use it for some projects because I think it's important to not get in a rut. Using different languages changes the way you think about code, and if you are not constantly updating the way you think about code, you are falling behind.
Besides, I really like the way DHH writes. His language is persuasive even when I quite certainly disagree with him.
Still going to learn a functional language first probably. But then Ruby/Rails.
Just after waxing in section 1 on how Rails is designed for his own happiness (including adding quirky methods like 'fifth'), he says "One of the early productivity mottos of Rails went: You're not a beautiful and unique snowflake"
"One example I always pull out is what you gonna call the primary key in your tables? When I was working with PHP and Java, every single shop, almost every single application, would have its own naming scheme. Some would say they have the Products table and then they'd have productid, others would have product_id, some people would have prod_id or p_id or P_id, and every time somebody made a new design decision it meant configuration. You now have to tell your models, your objects, how they're going to talk to this database table. Because it needs to know what the hell you called the freaking primary key column, and it just doesn't matter! Who cares what the primary key column is called? It just doesn't matter. It's going to have zero impact on the usefulness of your application."
"Don't repeat yourself is all about not having the same intentions spread out in multiple places. Don't have one configuration. If you're calling something, let's again take the example of the primary key. If you're calling that for id you shouldn't have to configure that in three different places that all have to work together and all have to be changed together. You should just pick one authoritative place to have that information stored. And then you can make changes from there. It also goes with the whole Ruby idiom: we don't want those Java boilerplate ten line things: that's repeating yourself. If you have the same idiom, if you have the same intentions, that should really be an exceedingly short expression. And that goes up throughout the entire framework. Just keep one place to change those things, and keep the idioms very short."
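The convention-over-configuration idea in those quotes can be sketched in plain Ruby. This is a hypothetical toy, not ActiveRecord itself: the base class infers the table name and primary key from the class name, so the model carries zero mapping configuration.

```ruby
# Hypothetical convention-over-configuration sketch (not ActiveRecord):
# the base class derives everything from the class name, so subclasses
# need no configuration at all.
class Model
  def self.table_name
    name.downcase + "s"   # Product -> "products" (naive pluralization)
  end

  def self.primary_key
    "id"                  # one convention, used everywhere
  end

  def self.find_sql(value)
    "SELECT * FROM #{table_name} WHERE #{primary_key} = #{value}"
  end
end

class Product < Model; end   # nothing to configure

puts Product.find_sql(42)
# SELECT * FROM products WHERE id = 42
```

Because the convention lives in one authoritative place, renaming a model or changing the pluralization rule is a one-line change instead of an edit in every model, query, and migration.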
Would it have made much difference if I had stayed with Ruby/Rails instead of Python/Django? I dunno. Many of my friends have careers with Rails, and they have nice careers. I have a nice career as well, as do many other friends with Django. In the end they are not that different, I guess.
I want to say thanks to DHH for keeping the framework simple, despite the many PRs that try to make Rails more complex.
Thanks for this document. I'd also like to read some document about the standards in big Rails apps.
DSLs are part of the reason for this.
Naturally there is a tradeoff. People that have been working with a particular DSL in Ruby for a long time can achieve amazing things in a short amount of time. Equivalent productivity is not really possible in Go (at least not yet, in my experience).
Having said that, I think it is easier to onboard new (and newbie) programmers onto a Go project than a mature Ruby project.
This is easily described with a contrast to Python. [...] Ruby accepts both exit and quit to accommodate the programmer's obvious desire to quit its interactive console. Python, on the other hand, pedantically instructs the programmer how to properly do what's requested, even though it obviously knows what is meant (since it's displaying the error message).
By the way, python handles your example very gracefully in comparison to other REPLs (for instance, node spits out a cryptic, but expected traceback on exit/quit)
It's very interesting to see such different philosophies.
After a decade in PHP, his love for Ruby is a lovely selling point for trying out that language.
How is the Rails market these days? It seems (personal perception of outsider) that it has been replaced by Python/JS work.
Some people prefer the Rails way of doing things. Others prefer something else. Nobody is wrong.
No real ecosystem can be built on Rails since its APIs keep breaking.
That's why in my experience, most people give up on rails.
Stop breaking APIs, introduce stability, and maybe the framework will be successful again. People don't give up on Rails because of Ruby, or because it's bloated or slow, but because it became an unmaintainable, non-upgradable mess as time passed.
been using it in production for ~1y now.
The most magic thing in .NET MVC is Entity Framework, and all its magic is either a bunch of strongly typed convention classes (of which you can create your own) or LINQ queries, which work pretty well.
If I'm building stuff for Adults with Money I'll choose static typing any day.
> We have to dare occasionally break and change how things are to evolve and grow.
> ...harder to swallow in practice. Especially when it's your application that breaks from a backwards-incompatible change in a major version of Rails. It's at those times we need to remember this value, that we cherish progress over stability...
I encountered this doctrine first hand. A division of a company I worked at had its division's website rewritten in RoR by some outside consultants. Even brand new, it was a mess of Ruby gems of conflicting dependencies which was undeployable. The build system was a train wreck as well. Some of the main web pages of their web sites like rubygems.org were not available - the web site had "progressed" and when you clicked on a link, the page explaining anything had disappeared. I mean hell, go to rubyforge.org this minute (1:15 PM EST) - it is down, of course. The blog of the author of the Ruby gem that then was the main gem that handled web serving did not inspire confidence.
As I would have to maintain it (as a sysadmin, not a programmer), I refused to deploy the system, since it was an unreproducible mess of conflicting gems, gem versions, and kludges. They decided to pay the RoR consultants additional money, despite them never having delivered a working system. By the time I left, it still had never been deployed. The division closed dozens of offices soon after, and then the division shut down - perhaps not completely related, but probably somewhat related to this project.
Meanwhile, the company's other divisions had decent programmers programming for Java application servers, and that code and those servers were much more solid. Builds rolled out easily (and could easily be rolled back if need be). It was really night and day. It makes sense this contempt for stability is explicitly part of their doctrine.
For people who have grown up in a hyper-connected always-online world, it's hard to explain the pure joy of hearing the sound of your computer picking up the phone and sending those tones. Because it meant going from isolated, disconnected and unitary to being part of a wider world.
Suddenly, everything was at your fingertips and it was intoxicating to me as a teenager. Fire up Trumpet Winsock and dial into the local mom and pop ISP. Suddenly you're surfing the early web using Netscape. Or open up WinVN and read some newsgroups. Or spend way, way too many hours playing MUDs (seriously, I think I spent almost every night MUDding during my teenage years).
Or learning cool HTML tricks by looking at the source of a page (back when pages were simple and you could tell things by looking at the source). Some of my earliest exposure to "programming" was because I wanted to make cool web things on my 1mb of ISP provided web space.
So yes, thank you Trumpet Winsock. Without you my formative years would have been very different and I likely wouldn't be in the career I'm in now.
In 1993 I was already using Linux, with an actual TCP/IP stack, not some bolted-on thing. In 1994 I was doing contract work on Linux already. One of the jobs was for these guys, still chugging along:
They employed a group of full-time people who continuously gathered new information about mining prospecting going on around the world, stuffing it into a database. This was turned into periodically refreshed web pages, for which subscribers could "click to pay". I hacked the CERN httpd to lock the click-to-pay data, and whipped up a billing system for invoicing customers. (Spat out TeX -> dvi -> laserjet: most beautiful invoices anyone ever got for anything.) I made a nice visual control menu for the whole system using a C program and ncurses, and even Yacc was used on the project for something.
One of the genius programmers on the database side claimed that "OMG, Linux causes data loss", because when the hundreds of megs of generated HTML was copied over to the servers (Linux ext2 FS), the disk usage was way lower than on the FAT. Haha!
In 1995 I got an Asus motherboard with two Pentium 100 processors, and ran Linux 1.3.x with early SMP support (big kernel lock heavily used). make -j 3 was only 27% faster than make.
In the 1970s Tasmania was the best equipped Australian state for computer based subjects. A lot of the schools had terminals to a central computer. Buses, I/O devices and assembler topics were covered as early as year 9 levels.
I feel like I don't fully appreciate the gradual transition from dial-up and Trumpet to LTE and a supercomputer in my pocket :) I wonder what people born today will experience that has as great of an impact.
It's doubly nostalgic to see it here again, 5 years later.
Edit: and there's still room on that donors page for any companies wanting to chip in something substantial.
Edit 2: 5 years, not 4.
That is to say, I was a tech support lackey, answering the phones and talking to dozens of dialup ISP users daily.
It was a small company, and of the three techs there, none of us were Windows users - two Linux, one Mac. Someone had helpfully printed screenshots of Winsock's various dialog boxes and taped them up around our cubicle. It was enough.
Hint: from scratch, reading RFCs, in Turbo Pascal, as a part of his internet newsreader project!
Also, http://petertattam.com is down currently, but
Looks like he still has that up at http://www.crynwr.com/drivers/
It would dial up a few times a day to exchange email using Demon's inbound SMTP (tenner-a-month account!), or one could laboriously route through it if one really needed something specific.
In summer 1995 they replaced it with an ISDN line.
Here's a short video  Shopify released last month about the transaction, where I reference how hard it was at the time to get online.
 http://www.nytimes.com/1994/08/12/business/attention-shopper... https://www.youtube.com/watch?v=eGyhA-DIYvg
I remember thinking "this is pointless" but went on to build my first web pages only a few years later (4th or 5th grade).
ftp.cdrom.com, metalab.unc.edu, sunsite.something, ...
Thanks for the free work,here are some stock options!
You're the lowly programmerz, I'm the IDEA GUY!
I think it competed with Trumpet Winsock. He had clients who used Trumpet Winsock but had problems configuring it so we helped them out.
It was later on, with Windows 95 OSR2, when IE was bundled and Windows shipped its own Dial-Up Networking stack, that Internet in a Box and Winsock lost a lot of sales. I think they sold MSN subscriptions with it.
AOL and Compuserve competed with sending out free floppy disks and later on CD-ROMs. Then there was that $500 Internet rebate that made a PC basically free but had a $35/month dial up ISP bill to pay for it for five years.
But I remember people registering Trumpet Winsock for $25 and then choosing a mom and pop ISP. Trumpet Winsock was downloaded from a BBS as shareware, and some ISPs gave out copies of it on a floppy disk when people signed up for service.
I saw this happen with Skype where I worked a couple of years. The company succeeded because of P2P: we grew with little infrastructure to reach 200M+ people. P2P became our DNA, rooted deep within (almost) every core component.
Then came the new wave of mobile messaging apps. We reacted... with a P2P messaging solution. It was obvious this wasn't working - you sent a message to someone from Skype for iPhone, and they got it... sometime.
We knew to have a chance against Whatsapp and other messaging apps we needed server based messaging, so we built it.
It took 3 years. Yes, it took this long to get rid of the P2P code from just the messaging components from the 20+ Skype products - we had 1,000+ engineers and 50+ internal teams by the end which significantly slowed things down. When we were done and popped the champagne - no one really cared.
And yes, the source code is still full of P2P references and workarounds to this date.
It is important to instead concede that you don't know the needs of the consumers in the higher level, and if you think you do, it is because you are guessing. The only way to avoid the problem is to not attempt to move into the higher level, at least not intentionally and not through business priorities.
This is extremely counter-intuitive because there are generally fewer expenses and greater market frequency at each higher level, which means superior revenue potential. Businesses exist to make money and to ignore moving up to the higher level means denying this potential (vast) revenue source.
This doesn't mean you can't move into the higher level of the stack and be really good at it. It just means you cannot do so both intentionally and as a business objective.
The solution is to double down on where you already are with what you are already good at, and focus on product quality of your existing products. Continue to improve where you are already good. Improvements and enhancements to existing products can gradually yield the next level, as the improvements progressively open new potential in an evolutionary fashion. While getting to the next level this way is much slower, it is also risk-reduced and continues to associate your brand with quality and consumer satisfaction.
This will only work, though, if the goal is improving the current product and not acquiring revenue in that desired higher level. Think evolution and not revolution. It has to be a gradual, almost accidental, increase of capability based on meeting current consumer needs.
If anything, large companies often miss out on new trends and changes in business and technology, but it's not solely because building that one new layer "up the stack" is so technically hard or different.
Apple's networked services have often struggled. But are they really higher level than the things Apple succeeds at? Asking whether enormous distributed data stores are higher level than Mail.app just seems confused. It's different, and it brings new challenges, but are they part of the same stack? And is the data ingestion and sanitizing that Maps struggled with higher or lower level than the client that was basically ok? You can multiply these questions and I'm not sure you can get good answers.
His basic logic was that:

* Success depends on processes.
* Processes, even though they might be thought of as abstract, are in reality a function of the people at the top.
* A company gets successful because some bright guy is the rebel; he questions the status quo, persists, and succeeds.
* As time goes by, the rebellious ideas actually become conservative ideas. The rebel is now on top. As his ideas fade, he struggles to stay on top.
* He recruits people who see the world through him; he builds processes that enforce that vision.
* This makes it difficult for the truth to be visible to the top management.
* By the time failure is visible, it is hard to turn the ship around.
* IN SHORT: Companies/nations fail because someone at the top did not know when to quit.
* In the end that rebel-turned-conservative becomes bitter. He thinks the world owed him something for what he achieved.
He explained with USSR examples. How a genetics scientist got promoted because his fake research reinforced something that Stalin had said long before, and his peers were scared to point out the fact because it might be perceived as anti-Stalin.
I observed Blackberry very closely and it resonated with me so much. The founders at one point blamed people for using the iPhone and not Blackberry.
The best companies in the world seem to be those where the top leaders quit at their peak to make way for their successors.
I don't think that manufacturing semiconductors is comparable to building maps. Apple should have done a better job with Maps; even though they do complex manufacturing, you'd have expected them to do worse at chip manufacturing.
Iirc they brought in 3rd parties to help with the chip fab, and certainly spent more money building that core competency than maps.
I believe the author is correct that the issue is companies not fully understanding, and consequently underestimating, what it takes to be successful in a different arena outside their core competency.
Google sees people as articles in a db. They don't understand people at all, they don't understand design as it relates to people, and they didn't understand that nobody needed another social network.
They probably underinvested (initially) in G+ and it was not a great product. It didn't achieve critical mass quickly, and thus had no chance of ever growing as a social platform.
However, Google is a lot more capable of creating something like this because they have all the core competencies down.
I guess my takeaway is that companies can in fact take these arenas, but they underestimate the challenge. So to use a drug-dealing analogy, they try to start moving bricks and kilos, instead of working their way up learning the market pushing dimes and quarters.
They start too big, and when you fail big, you don't get the recovery of a smaller failure, which affords small relaunches and features.
Tldr: big companies try to enter at the top, can't recover from huge public failures, and either exit or buy in.
Apple is a fantastically successful software and industrial design company. The vast majority of their production is outsourced. This is not vertical integration.
Additionally, actually, Apple has tremendous amounts of hugely successful and popular software.
Though I dig the underlying point of this article, that product management is hard, I think the examples are less than good.
Current usage of the database treats it as a loose, ad-hoc, difficult-to-maintain, polling-based API between multiple applications.
The future perspective looks back on our time, shaking its head at the way people use databases for everything in the same way that we shake our heads at bloodletting.
Oracle's business model is (1) convincing people to use platforms they shouldn't be using and then (2) selling the victims ongoing hacks and services to work around the limitations of the model.
Amazon's software services won't be built on a database. They'll be built using a decentralised messaging platform.
Well for one thing we know that Intel spends several $billion to open a new semiconductor plant and has a dozen of them already. https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_si...
Whereas SAP is, well, a lot of software. Which is something, but Intel needs to make a lot of software too, and chip designs are in some ways a specialized form of software.
So I think in some sense Intel is strictly more challenging to replicate than SAP. (But this is probably just my misunderestimation talking. :-)
"What the article is referring to as stack fallacy is the work of Physics Nobel Laureate Philip Anderson: https://web2.ph.utexas.edu/~wktse/Welcome_files/More_Is_Diff...
Let's give credit where it's due, please."
Because even the author references competency-based views of competitive advantage, but for some reason ignores resource based views, and ignores the fact that companies might be aware of their competences. That is to say, I'm sure that large companies tend to mostly be aware of what their competences are based on the resources and knowledge that they have. If they don't have marketing departments that have analyzed the ERP market, sales teams with ERP training, tech departments with key HR, key knowledge etc etc, then I'm certain they are very well aware of this.
Maybe some companies have had marketing missteps and have made poor strategic and competitive decisions, however, but I really doubt that it's due to a lack of introspection or simple analysis as described.
Also, IBM didn't "think nothing much" of the software layer. They misunderstood the nature of power in the supply chain, and most importantly, didn't solidify their position within the supply chain while they were dominant.
A related factor is that larger companies tend to be more specialized (formalized processes, specialists, focused teams/departments, and so on), meaning they can be prematurely optimized with respect to new goals and poorly equipped to conduct the necessary roaming.
Wasn't IBM a classic case of not trying to build the layer above them on the stack?
The Wikipedia page on IBM PC DOS even claims that their "radical break from company tradition of in-house development was one of the key decisions that made the IBM PC an industry standard".
Here let me make an article... wait wait... ah... "Big Companies FAIL" that sounds like nice click bait. Now... hm, let's invent some stupid word to pad it out how about the 'Stack Fallacy'. Programmers will dig the 'stack' part. Yeah. Ship it!
Seriously, this article is content free.
People make products. Sometimes they work... sometimes they fail.
If you pretend you have some magical insight into why they fail or succesd with gems of wisdom like:
found it very difficult to succeed in what looks like a trivial-to-build app: social networks.
The stack fallacy provides insights into why companies keep failing at the obvious things, things so close to their reach that they can surely build them. The answer may be that the what is 100 times more important than the how.
Really? What you build is important?
Why is this the top of the list this morning?
THIS! +1000! I would even leave out "often", or at least replace it with "usually".
And in what order.
We run an eCommerce platform, have a variety of clients using it, they have a number of customers. At least once a week we get an accusation either from a client or an end-user of some outlandish nefarious behaviour, usually due to some complete lack of understanding of the nature of technology.
Way back when, we responded, tried to help, tried to explain, but it tends to be the case that if someone has made their mind up, they've made their mind up, and anything you say can and will be used against you - confirmation bias is a harsh mistress.
The best response is usually no response, I'm afraid to say. It's a drain on your time, they won't be any the wiser unless you're prepared to sink serious time into educating a stranger, and more often than not responding results in escalation, and people doing stupid things like involving lawyers and law enforcement.
Case in point: About six years ago, we had an older guy phone us up frothing about how we'd hacked his wife's computer and she'd accidentally bought something from one of our clients. We explained that it would be hard to accidentally enter your address and credit card details, and that if they didn't want the order they should contact the merchant, not the web developer (they clicked our "ecommerce by" link in the footer of the client site - we don't do that any more!). We thought that was that. A week later we got a stern phone call from an ombudsman who wanted to know why we were ignoring the distance selling rules and taking advantage of old people... and they didn't understand that we weren't a merchant, didn't place an order on their behalf, either - so months of time were wasted, and we narrowly avoided ending up in court over a non-issue.
Anyway. When you have a conversation with an idiot, nobody watching can tell which one of you is the idiot.
Yet the founders had a collection of letters from people, actual hand-written letters, asking for help with their hacked computers, asking how to hack, at least one probably-paranoid-schizophrenic one about... errr... hacking and the government and chips in brains and all that sort of thing and whether or not this company could help protect them against the hacker aliens (I don't recall the exact details but this is not an exaggeration of the flavor, alas).
There's an amazing amount of this sort of thing going on. At scale the only thing you can really do is ignore them; engagement doesn't go well for anybody, even the sender just ends up more frustrated and angry than when they started if you try so it's not even good for them. On an isolated basis you might get lucky, but don't count on it.
Edit: Kinda commenting on the thread above anchored on madaxe_again's comment, let me emphasize that I'm not saying ignore it because you can't be arsed, or because replying is beneath you, or because elitism... I'm saying that ignoring it works out best even for the sender, which is why you should do it. That it happens to be the easiest course of action for you as well is just one of those rare times when the easy action also happens to be right.
"My facebook suddely split in half and this screen popped up with all these random cyber space options and it was like watching and assessing things soooo weird? and talking about child... and children being forced WTF????? is this some sort of cyber police thing that my IP was accedently allowed to access so i could help stop child abuse on the net or am i going crazy???? has this happened to anyone else??? - :(( - feeling confused".
[what happened, was that this person most likely clicked F12 or Ctrl+Shift+I - and brought up the chrome/firefox/etc. developer console]
Thanks to this, I get about a dozen emails a week from people asking for Waze help. (Lots more when Waze changes something, like hardware support for a particular device!)
I've tried contacting Google (either to get these people help, or to get my email address removed...) with no luck.
I empathize with Daniel. It's an unexpected downside to open sourcing something and asking for credit.
The only way to win is not to play.
Bless him, the guy mostly kept on signing off his emails as "John," having forgotten to change his name in the "From" field, except for the time he forgot and signed his "real" name again.
(I say "emails" - it was a bizarre few exchanges, starting with "I have a job for you," and myself replying to his opaque emails to find out quite what on earth the guy was on about)
EDIT (thanks mariuolo): http://www.theregister.co.uk/2006/03/24/tuttle_centos/
Some years ago there was a similar issue with the default Apache website on CentOS. Somebody found their webspace reset by their hoster, but rather than complaining to the hosting company, the user complained to the contact info shown on the default website, claiming they had hacked it. (Sorry, couldn't find the link to that story anymore.)
As an aside, hopefully someone can also recommend that the emailer use a different service to host high-quality versions of her photography so that potential clients can evaluate the clarity of her technique. I'm not sure I'd want to rely on Instagram as the sole example of my work, but maybe she's targeting a different clientele than I'm imagining.
As an aside (#2), who hacks Spotify accounts?
You have contacted the software design company that licenses these commerce systems to merchants. We do not deal with the merchants' customer service. Please directly contact the merchant instead.
This is an automated message and you cannot reply to it.
Intelligent people don't bother reading them closely, and the people who have read them often use the information contained in rather stupid ways.
Granted I have spent many hours over the years reading contest / entry rules and ToS type documents, so I'm pretty comfortable picking on myself a bit here and there. Often I've read a very clear ToS and then observed the responsible company basically disregard their own rules and stated processes. Two notable examples were for a Deadmau5 remix project (he 'lost his laptop' and they stretched the contest for a couple months, barely supplied any promised materials, etc) and a Local Motors contest (routinely lied about what they were looking for as judging criteria, then claimed to contact winners on day X to start authorization process, instead vetted winners in advance and then used day X to announce). I've used these experiences to temper my trust of any online engagement or contest, because it's nearly impossible to hold any provider accountable when they're dishonest or just inept.
Antagonizing bothersome people is a form of entertainment from time to time. If there's nothing to be gained from actually being constructive, then being obtuse might be the most worthwhile course of action. YMMV.
It reads like text generated by a computer
I thought it might give me more users... What it gave me was lots of angry emails once someone used my mail client to send out spam. That header was gone pretty quickly.
Here's what's going on. Her IG account may, in fact have been hacked. This happens. She's obviously afraid and angry. She is the kind of person who thinks she can solve all of her own problems, and found the licenses section of the app, which included something with a nonsensical name (libcurl) and a domain "haxx.se". Despite having known that haxx.se is for libcurl basically forever, I occasionally see it and associate it with gray or black hat stuff before I remember. So it's not at all surprising that a non-initiate saw this and thought it might have something to do with her IG account being hacked.
Daniel says his reply to her original email was "clear and rational". It should have been "understanding, compassionate, and patient". This is someone who is seriously freaked out, because her livelihood is at risk, and based on the fact that she went digging through the app, she is probably having what a shrink would call a "crisis of control". So here's what the author should have done:
1 ) Patiently explain what libcurl does (it lets programs request web pages, just like a browser). Explain that he's the author, but he's given it away for free. The license is in the app because he took pains to ensure that nobody can package it up with some slick marketing and sell what he's giving away free.
2 ) Acknowledge that haxx.se sounds kind of shady. Explain why he chose the domain. Self deprecating humor would be great here. Explain that despite this, all kinds of apps use libcurl for perfectly benevolent purposes.
3 ) Explain that he has nothing to do with instagram (commenters have suggested the car parts analogy, which seems like a good plan).
4 ) Finally, and most importantly, link her to their hacked accounts page! They have people paid to deal with this stuff, who are much, much better at dealing with panicking laypeople.
There is a lot of "reason good, feelings bad!" stuff in the tech community these days. It makes people see us as a bunch of borderline autistic, self centered, stuck up, evil nerds. Many of us, myself included, were terrible with social interactions and dealing with our feelings at some point in our lives, so the finer points of human interaction and emotional thinking left a bad taste in our mouths. But we've all grown up. We aren't social rejects and evil nerds anymore. We have lives, careers, friends, and family. We need to let go of the stuff we suffered in our youth, forgive those "stupid popular kids", and learn how to be nice.
 In the sense of the popular conception of borderline autism, not the clinical condition, which generally doesn't make you a jerk.
It's pretty unnerving because the person clearly needs some assistance but also thinks you are the bad guy.
It's like he makes paint. He made this really cool shade of red paint that everyone likes. One day, some really mean dude used this guy's paint to paint his car red. He then used that car to go on a crime spree. The victims of the crime then went to the guy who made the paint demanding their stolen money back.
You keep using that word, I don't think it means what you think it means.
In all seriousness though, she went to the ToS for help with the Instagram app? Why not write Instagram support directly?
Would it have killed him to mention this?
"Dear Photographer Lady: I run a very well-regarded library, you may have had this reaction because I have the tongue-in-cheek name haxx.se [alternatively: because of the coincident name haxx], however I am a well-paid consultant similar to yourself and other than this choice of domain name there is nothing alarming. The library is famous and you should see a similar notice in all of your friend's phones (or anyone else's you check). It is in use by major corporations including Apple and Spotify. Sorry about the confusion."
that's literally all this is about. (obviously.)
In fact, it makes me seriously question the author's good faith that he ends with the call-to-action "I've tried to respond with calm and clear reasonable logic and technical details on why she's seeing my name there. That clearly failed. What do I try next?" without mentioning the elephant in the room.
The ChakraCore roadmap shows that they plan on porting the interpreter to Ubuntu, but not to Mac OS, and they don't plan on porting the JIT. So even if it does go Windows + Linux, it will still be a low-performing toy on Linux.
This is vaguely interesting from a technical point of view, but doesn't seem like it'll majorly impact the future of Node.js, except maybe as something Microsoft can offer for Azure customers that choose Windows Server.
If V8 changes its API drastically (which it frequently does) this little experiment is basically over. There is only so much work people can do to keep up with a moving target.
Barring NodeJS creating an abstract JS engine API that developers can create v8/chakra/spidermonkey adapters for, I can't see this being a success.
That being said, I hope that Node and V8 coupling becomes less tight and the interface between the two does become abstract so that we can all benefit from a choice of engine that implements the requirements of NodeJS.
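To make the idea concrete, here is a hypothetical sketch of what an engine-neutral embedding interface might look like; none of these names are real Node or V8 APIs, it's just the shape of the abstraction being asked for: Node core would program against the interface, and V8 / ChakraCore / SpiderMonkey would each ship an adapter implementing it.

```javascript
// Hypothetical engine adapter interface (not a real Node API): the contract
// Node core would depend on instead of V8's concrete C++ surface.
class EngineAdapter {
  createContext() { throw new Error('not implemented'); }
  bindNativeFunction(context, name, fn) { throw new Error('not implemented'); }
  evaluate(context, source) { throw new Error('not implemented'); }
}

// A toy adapter backed by the host engine itself, just to show the shape.
// A real adapter would sit on the engine's C++ embedding API instead.
class HostEngineAdapter extends EngineAdapter {
  createContext() { return { bindings: {} }; }
  bindNativeFunction(context, name, fn) { context.bindings[name] = fn; }
  evaluate(context, source) {
    // Expose the context's bindings as parameters to the evaluated source.
    const names = Object.keys(context.bindings);
    const values = names.map((n) => context.bindings[n]);
    return new Function(...names, `return (${source});`)(...values);
  }
}
```

With something like this in place, swapping engines would mean swapping the adapter, not patching Node core every time V8's API moves.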
But this is Microsoft going out of its way to provide a swap-in engine for Node, with a focus on optimizing the engine for Node through benchmarks, and cross-platform builds in the pipeline.
Hopefully, Microsoft will be able to achieve the following:
1. A smaller Node binary through a stripped-down engine.
2. A GC optimized for Node and server applications.
3. An engine optimized for Node, working closely with Node's core technical team.
Perhaps this might encourage Google to do the same.
Supporting additional JS engines would ultimately lead to a healthier ecosystem and higher quality JS implementations.
Great work everyone.
Who has the fastest JS core these days?
A few questions though. Node tracks V8 releases, so the first thing I'm wondering is whether ChakraCore will continue emulating V8 APIs into the future? What happens when ChakraCore adds breaking changes which V8 doesn't support?
I presume MS will be supporting most development on the engine, as they benefit from IoT Core and other applications relying on Node. If Mozilla were to do the same and add SpiderMonkey, this would likely see the 3 major vendors accelerate adoption of JS standards further. I'm more bullish on JS development into the future.
If I were to predict, I'd see Node switching to ChakraCore as its primary engine in a year. V8 has been 'we build you follow'. I know Domenic and other Google developers have helped with the relationship with Node contributors, but what will happen when MS offers a less-maintenance-prone engine? Embrace, extend, extinguish!
I guess Google has a few conflicts of interest with Go and Dart.
Please do not post comments on the GH issue unless you have something important to add. These issues gain a lot of attention and it makes it _incredibly_ hard for collaborators to communicate.
Locking the issue to collaborators means other people from the outside who have a significant contribution or want to help can't do that.
Comments like +1 -1 and such create a significant amount of noise.
Support open source, keep the discussion clean.
It was a monumental task to try to create something that would please both front-end and back-end web engineers. Another issue with Meteor is that it was envisioned, not extracted. From Rails' creator, DHH: "First, Rails is not my job. I don't want it to be my job. The best frameworks are in my opinion extracted, not envisioned. And the best way to extract is first to actually do."
Meteor was the goal, not an actual, real-world application. Often when this is the case the software ends up solving a bunch of problems that seem logical to solve, but in practice are not actually practical (another framework like this that comes to mind is the notorious famo.us project). Compare this to Rails and React which were forged in the crucible of real, day-to-day development and problem solving.
 - http://david.heinemeierhansson.com/posts/6-why-theres-no-rai...
"Blaze is threatened by React". You can use React or you can use Blaze. If React becomes so popular that Blaze is not longer used, that's OK... nothing to be threatened about. It's nice that Meteor can move with a trend.
"Tracker and Minimongo might eventually disappear as well". Tracker and Minimongo aren't giant stick bugs near Australia that need to be preserved. It's ok if they are replaced. They are internal tools Meteor uses to provide its "reactivity". I doubt reactivity is going away.
Other non-scary things: Routing is solved by community packages. Pagination, forms... really? Server-side rendering has the "spiderable" package, but the SEO / server-side rendering problem isn't unique to Meteor.
The database issue is valid. Meteor uses MongoDB. But you shouldn't go down the Meteor road, try to shoe-horn relational data into a non-relational DB, and then say WTF. You knew from the beginning that non-relational DBs have their own set of problems. My limited understanding is that MongoDB was picked because it was the easiest way to get the reactivity that the MDG was looking for. Meteor's roadmap says SQL support is on its way.
I don't know where the OP is going with this. Maybe this is the part 1 of the late night TV commercial where they list all of our problems (think Slap Chop), then in Part 2 he'll solve all of our Meteor problems if we buy the next book he writes.
It's incredible how accessible Meteor is for this purpose:
* One line complete local setup
* One line deployment (to Meteor servers, but still)
* Out of the box User Accounts
* Simple templating engine with Blaze
I can think of no other framework/language combo where true beginners can deploy a live, database-driven website in a couple of hours. I really wish Meteor could focus more on this aspect: becoming THE entry-level framework for learning web development.
However Meteor's business model is around hosting, so it's inevitable they move further and further towards the needs of professional, rather than entry-level, developers. And this takes them further into the areas outlined in this article where they are currently weak.
What happens if suddenly I want to rewrite part of the back-end or part of the front-end with something else for various reasons? What happens if I want to switch from MongoDb to RethinkDb or Postgres for some reason? It's good to have default choices, but it looks from the outside like the default choices with meteor are pretty fixed.
But maybe I'm wrong; that's just how it looks from the outside.
A more general answer: I feel like the web programming world doesn't really need new frameworks, does it? Rails or Django got mainstream adoption because there was a need at the time, likewise with frontend JS frameworks (which seem to be consolidating around just 2 - React and Angular), and likewise with Node as filling a need for easy async. I'm not that knowledgeable on Meteor, however I think by default it's reasonable to expect no new framework to gain mainstream adoption without a major change to the web.
 I don't know if Meteor's x-platform appeal is enough to convert users from other x-platform, native and/or hybrid solutions (Ionic, Titanium, RubyMotion, etc.).
Routing? Really? I am not an expert, but I think if routing, of all things, is hard to do in your web framework, you most likely have a problem. That's requirement zero!
I just spent a good amount of time over the last few months building a prototype for an application in Meteor and it has been a joy.
Out of the box I got happy, grokable app/server communication, I got sane user account tools and I got a build process that works well enough that I haven't thought about it at all. I almost never need to look at documentation, I just build features. I've only needed a few community packages, and the ones I have used have been working pretty well for me.
I feel like I've been living the dream. So much ceremony and overhead just melted away.
I'd be curious to hear what kind of issues people have hit with blaze/meteor package management/etc that make them want to swap in react/npm/etc. (I spent the better part of 2015 with react/flux and it would take a lot to get me to switch back).
Instead, I just utilize `Meteor.methods` for all client/server interaction. I actually think it's pretty nice because you define the method on the client as a "stub" that gets called while it waits on the server response. I think the tutorials and guides focus too heavily on their fancy client/server mongo magic.
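For anyone unfamiliar with the pattern described above, here is a rough, self-contained sketch of the idea; `registerMethods` and `callMethod` are made-up names standing in for Meteor's real `Meteor.methods`/`Meteor.call` machinery: the same method map is registered on client and server, and the client runs its copy immediately as a stub while the authoritative server result arrives later.

```javascript
// Toy model of Meteor-style method stubs (illustrative names only).
const methods = {};

// In real Meteor this map lives in shared code, so both the client
// and the server end up with the same method definitions.
function registerMethods(map) {
  Object.assign(methods, map);
}

// Simulated client call: the local stub runs right away ("latency
// compensation"); in real Meteor the server run happens over the wire
// and the client later reconciles with its result.
function callMethod(name, ...args) {
  const stubResult = methods[name](...args);   // optimistic local run
  const serverResult = methods[name](...args); // authoritative run
  return { stubResult, serverResult };
}

registerMethods({
  'posts.insert'(title) {
    return { title, ok: true };
  },
});
```

The payoff is that the UI can react to `stubResult` instantly instead of waiting a round trip.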
So I said, this is pointless let's just simplify things with Meteor + Blaze. Now Meteor says, it's "threatened" by React?
The big problem i see with meteor for the future, is that MDG has/will have to answer to its shareholders rather than its community.
I completely fail to understand why some developers seem to be allergic to learning about data structures and the means of query-ing them. It's not really that hard. If you can JS, you can SQL.
It appears that 2016 is the year when the hype from NodeJS died down and vendors/OSS-community must now deal with the hard, un-glamorous, work to clean up, maintain, prepare roadmap, and so on. Which is good!
Or it could also be the year when people realized that the use-case for NodeJS is actually fairly specific/niche (unless they love JS so much that they're willing to absorb the pain of using NodeJS + its ecosystem).
SailJS vs TrailJS, ExpressJS is dying, NodeJS vs io.js, the StrongLoop+IBM fiasco (and SL's reputation), now MeteorJS! I had high hopes that MeteorJS could be one of the premier NodeJS frameworks (Rod Johnson, who created Spring Framework, invested in the company. I hope they listen to him...).
I love to see competition in the NodeJS ecosystem, and hopefully sooner rather than later a few solid options emerge, because right now nothing is a good option except barebones NodeJS and roll-your-own-framework.
All the things mentioned as "going beyond basics" seem like things that meteor was never designed for and that we have other tools to handle.
I really feel like the problem is with the people behind meteor trying to make it THE framework, instead of just being a framework that excels at a single purpose.
When I once had to replace a very old legacy system (non-web, but it still holds) I've just slapped a dummy do-nothing proxy-like system in the front and gradually did stuff piece-by-piece. Then threw out the old garbage when it wasn't doing anything anymore.
But from what little I understood about Meteor from the tutorials and examples, there's one single giant system, that you either use or don't. This means, if one hits some bug or - worse - architectural limitation, they're going to have really tough time.
 for instance, I made it trivial to invoke client functions asynchronously from server code using normal call syntax. Now how to deal with timeouts? The syntax doesn't permit much without leaking, destroying the original point.
But, having said that, there is still a very bright future ahead for Meteor. The people at Meteor Development Group have identified these pain points and have plans to improve some of it. There are various tasks that they are working on now that will change Meteor a bit, but also make it a better platform (in my opinion). Things such as better NPM / module support, support for multiple different types of databases, faster build times, better testing support, etc.
The platform has been around for a while and I still think that it can reach the goal of being the go-to JS platform for building cross-platform applications.
I just don't think it's as good as the other options technically (does not scale well as a server, which is a problem for most applications).
More importantly, as a developer, I believe VC money bringing opinions to the open source community (which is what this is) is a bad idea. The whole point of open source projects is to have natural selection happen. Meteor having VC money allows them to get it wrong but still market it etc... Which just seems like a poor idea.
So I got to work on some more advanced features I'd been thinking about. And at some point, Meteor started throwing an error from somewhere in its innards, and for the life of me I couldn't figure it out. Some kind of problem mapping data to UI, I don't remember the exact message.
I decided I needed to know the guts of Meteor to be able to debug problems like this, and put the whole project aside to wait for the Meteor in Action book. But now I'm onto other things.
I did some research on related areas a few months ago. My ideal stack at the moment would involve:
* redux or cerebral with immutable model
* css modules
* a realtime-enabled version of falcor, which doesn't exist yet.
The last bit is still the missing piece for realtime, as far as I'm aware. Neither GraphQL nor Falcor seems to have been designed with realtime model updates (via websocket) in mind.
Eventually you might want to scale and then, as in any library/framework/platform things will get a little more complicated. I don't see it as a problem, really.
Meteor always was opinionated and maybe some opinions were unpopular from the beginning (MongoDB, for example). But when you start adding multiplicity (React/Blaze/Angular for the frontend, Iron Router/Flow Router/React Router for routing, Meteor Package System/NPM...) you can lose track. I'm afraid this is happening now.
Meteor JS is the best thing I picked up in 2015.
It seems that many people are unhappy with Meteor in its own way. It takes 375 factors to make a 'Rails' and Meteor has 370 for each person, but each person is missing a different five.
> This creates a bigger barrier to entry compared to front-end frameworks like React and Angular, or server languages like Go or Elixir.
Okay, Meteor has an arguably bigger barrier to entry than React or Angular (maybe), but definitely 100% not Go or Elixir. I think this is just disingenuous.
> I believe some of Meteor's early choices may have ended up handicapping it. For example, Meteor's focus on real-time applications makes them a lot easier to build compared to any other platform out there. But it does also come at the cost of hidden complexity, and potential performance problems down the road.
This is the #1 problem of every framework, ever. Mr. Greif is not saying much, if anything at all.
> Once a new Meteor user starts to go beyond the basics and look into things like routing, pagination, subscription caching & management, server-side rendering, or database joins, they realize that the difficulty curve quickly ramps up.
Here, he's conflating things that are easy (routing and pagination) with things that are hard (subscription caching), so it's hard to see exactly what the criticism is here. Not to mention that Iron Router is pretty mature. I haven't run into a routing issue yet that it couldn't solve. As far as joins and caching, etc., these are definitely difficult things. I don't think any framework out there completely (and in the general case) solves these out-of-the-box. Maybe someone could introduce me to one.
> The result of all this is that Meteor has ended up in an awkward place.
I think it just ended up where almost all other frameworks end up: useful, but not completely generalized. In fact, I think striving for a very high degree of generality might be a mistake, lest we want to end up with something like Hibernate.
We bet on Meteor and it's doing phenomenally. The development experience is great. I haven't been this happy and impressed since Rails 0.9 ripped me a new case of programming love.
Look, meteor has its downfalls but so does any stack. There's nothing working against you, it's all just code. There's no magic. You don't understand subscriptions, merge box, and how to handle joins? Level up a little bit. It's not a hard thing to figure out. Open source code makes it easy to dig in.
Yes, mongo is looked at in bad light, it's also incredibly powerful when used right. The trick is to not be oblivious to the tools you use and work around it when it matters.
The most important thing for a product, startup, and company is the ability to make the right calls and grow at the right time and then be able to handle and accommodate problematic areas when you need to.
Does meteor scale? Yes. Does it have a theoretical limit? Yes. The trick there is to be cognizant of the limit and plan around it when you are approaching.
Oh you want to scale out your processing? Easy. Fan out some back end nodes in whatever language you want. Pull from mongo, do your crunching, and then feed it back in. Then meteor will sync it to all clients. This is pretty powerful. You can completely swap out and fan pieces out with meteor.
We have a massive app in meteor. It's also architected to be modular if we have to break it up.
The isomorphic nature of the code base has created pleasant APIs on both the front end and back end and has drastically simplified a real time app experience. Meteor sets a new standard for web applications and demands new perspectives. When you embrace it you get an ecosystem that is quite revolutionary to work with. Everything from the web-app, latency compensation, pub/sub, down to Cordova builds.
Overall, we are very happy. Yes things can be better, and we are looking forward to seeing meteor evolve.
Meteor shouldn't focus on supporting new view frameworks like blaze vs react. Just focus on adopting the tools out there and making them better. That's why meteor doesn't need blaze. Meteor doesn't need a router. Meteor should pick the best open source versions of those in the JS community and make them easy to work with, in line with the "vision". Meteor should focus on the developer UX. This is their power. An opinionated framework that removes the stress of configuration and has tools that fit really well together to allow an individual, team, and company to focus on building what matters: features.
The vision is grand and I believe it in and I'm also not worried about scale. People said the same shit about rails, and x, y, z. Scale is a good problem to have and when you have that problem you'll figure out what you have to do.
If anyone has any questions about meteor architecture feel free to ping me.
Can someone point to the backstory here?
A year or two later. "Project Foo: Total Crap". Glad I didn't look into that.
Relational DBs exist for a reason.
Being stuck with Mongo is the major reason why I wouldn't use meteor again.
The other stuff was minor and had solutions. We also built an open source library that wrapped up collections well.
Curmudgeony security issues aside, this undeniably feels like The Future and a big deal to watch out for. It's also one of those cases where a creator / maintainer makes a huge difference for long term viability in my opinion. Feross is crazy smart and has been working with all the related tech for a while now (via PeerCDN, Instant.io, etc, etc), and is just an all around respectful, nice guy, which is important for the continued development / community aspect.
We're really at the mercy of open platform-minded engineers at Google, Apple and Microsoft though! I wonder what we can do to help support those folks.
Unfortunately, after a certain file size it'll just crash your browser. It'd be great if there was a way to work with large (2 GB+) files.
- Where is the downloaded data being stored? With a traditional bittorrent client, the data is written to disk. Since JS doesn't make raw disk access available, I'm assuming it's being kept track of through some JS API that tells the browser to store this data. What API is it using?
- Even when I finish downloading the video, the player doesn't allow me to seek to random positions in the video. It displays a "this is how much is buffered" bar that is way smaller than the green bar at the top of the page indicating download progress. Why is this the case?
- As you can see in the screenshot, there's lots of nodes that are labeled with ip addresses that are not visible to my computer at all. Is this because the displayed ip addresses are self reported?
 - http://nacr.us/media/pics/screenshots/screenshot--17-46-37-2...
Another question: how do I open the file once downloaded? (I use uBlock; should the file be displayed in the rectangular area next to the graph?)
Cease-and-desist letters come with a 200-1000 fee depending on the content, and now it's trivial to make someone download stuff illegally in the background.
Correct me if I'm wrong, but this poses a problem if you ever want to take WebRTC further (i.e. in a self-hosted mesh network).
2. Looked at network traffic, and it seems to open separate TLS sessions per transferred data packet, which is not the most optimal thing to do; it might be an artefact of being hosted on https. Probably a CPU bottleneck right there.
3. Doesn't store anywhere (local/session storage).
My idea was a browser-plugin for youtube, that would take the downloaded video and start seeding it. On the other side, if a video has been blocked by YT, it would automatically use the torrent version.
- No support even in modern browsers by default 
- Don't want to [maybe] get into legal troubles if it's wrongly used
PS, apparently the caniuse info was wrong, since now it appears in green
Funny, Fx44 does support WebRTC
You can tag or organize the data locally and cache it, or return it sorted to the nodes which serve it to others. People don't give a shit about webpages for search, they care about information. The web is a big rss feed, and our old feedreader "google" stopped doing that well, and also we pay a massive privacy tax for that now.
I see this happening in ~2 years for really techie people and being standard in 5.
edit: elastic search, webkit, real time, distributed file systems, apache spark, google tensor flow. These ingredients will be used to make the new browser which browses information and returns that information not the actual web pages.
It complains it cannot play the file because I don't have Chrome with MediaSource. Why not serve an ogg or webm, for crying out loud?
Also, why auto-start the download?!
After the download is finished, where can I watch the video? There's no link for watching it anywhere.
If I refresh the page the download starts again.
I realise this is just an experiment, and kudos for that, but the author could have made some better choices regarding the above.
"The official discovery date is the day a human took note of the result. This is in keeping with tradition as M4253 is considered never to have been the largest known prime number because Hurwitz in 1961 read his computer printout backwards and saw M4423 was prime seconds before seeing that M4253 was also prime."
The number is 2^74,207,281 - 1. Naively stored like a normal integer in memory, that's roughly 75 MB of RAM just to store the number. Compare that to your 64-bit time_t and imagine how long a duration of time could be represented by such a large number. Stored on 3.5" floppy disks for some reason, that's roughly 50 disks just to store the number.
Edit: Duh, bits, not bytes. I leave my mistake here for posterity.
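For the record, the corrected back-of-the-envelope arithmetic (plain JS, using only numbers from the comment above plus the standard 1.44 MB floppy capacity):

```javascript
// 2**74207281 - 1 written in binary is exactly 74,207,281 one-bits.
const exponent = 74207281;
const bytes = exponent / 8;        // bits -> bytes: about 9.3 million bytes, not 75 MB
const megabytes = bytes / 1e6;     // ~ 9.28 MB
const floppies = megabytes / 1.44; // ~ 6.4 standard 3.5" floppies, not ~50

// Its decimal length follows from log10(2**p) = p * log10(2).
const digits = Math.floor(exponent * Math.log10(2)) + 1; // 22,338,618 digits
```

So the bits-vs-bytes slip inflated both figures by about 8x: the number fits on seven floppies.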
"It is not known whether or not there is an odd perfect number, but if there is one it is big! This is probably the oldest unsolved problem in all of mathematics." 
How does this work? You just try to divide by every possible number and ensure the result isn't an integer?
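Not quite; trial division would be hopeless at this size. For Mersenne numbers 2^p - 1 there is a dedicated, far faster check, the Lucas-Lehmer test: p - 2 rounds of squaring modulo the candidate, with the number prime iff the residue ends at zero. A toy version using JS BigInt (a sketch of the mathematics, not GIMPS's actual code, which does the squarings with heavily optimized FFT-based multiplication):

```javascript
// Lucas-Lehmer primality test for the Mersenne number 2**p - 1,
// where p is an odd prime.
function lucasLehmer(p) {
  const m = (1n << BigInt(p)) - 1n; // the Mersenne number 2**p - 1
  let s = 4n;
  for (let i = 0; i < p - 2; i++) {
    s = (s * s - 2n) % m;           // the expensive step at GIMPS scale
  }
  return s === 0n;                  // prime iff the residue is 0
}

// lucasLehmer(7)  -> true   (M7  = 127 is prime)
// lucasLehmer(11) -> false  (M11 = 2047 = 23 * 89)
```

The whole test costs only p - 2 modular squarings, which is why exponents in the tens of millions are feasible at all.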
Incidentally, the project source code contains one of the fastest Fourier transform implementations that I'm aware of.
What are they thinking? They can do better than this. Much, much better.
This is a problem if you are writing an app for a client who doesn't want to publish on the app store. The only option is the very expensive and opaque volume-licensing program which is useless if your app won't have many activations anyway.
Ultimately Windows 8/10 is a closed platform similar to iOS. Are we as developers willing to pay 30% of our revenues to Microsoft when the day comes that UWP will replace .net/win32?
His "Website constructor" app is in 3rd position when I search for "Website". Not that bad for such a generic keyword.
Also, I guess one thing with these apps is that the price range (between $12 and $20) is quite high compared to the majority of the other apps on the store, and (unfortunately?) a lot of people are not ready to buy apps >$2.
I guess that the store search algorithm depends on the conversion rate (people buying the app VS people trying it) and that conversion rate is probably pretty low, so maybe lowering the app price could lead to more sales and higher ranking.
I've come back to full-time Windows development after over a decade of PHP and Rails, and you could lift my experience with trying Entity Framework and lay it on top of these complaints, and not notice a difference.
I've always said this about Microsoft products: they make it really easy to get to 70% of what you want, and then make it nearly impossible to finish. There's a massive bend in the effort/results curve. With open source, it's more difficult in the early stages, but it's a steady progression to 90%, and then you have the tools to work out how to get exactly what you want, if you want to make the effort.
Recently I contacted Microsoft technical support over a product that has some serious issues. Basically they tell me that they have no intention of paying for the repair even though the product is under warranty, in blatant violation of European law.
My partner has a laptop with a Core i3 processor and above-average RAM, and it is on the whole a beast compared to my Dell Chromebook 11. My partner upgraded to Windows 10 and I have since used it occasionally to check something here or whatever.
From a couple of admittedly brief sessions using it, I have noticed that Windows 10 seems extremely "laggy", for want of a better word, as though there is a 100ms delay between clicking something and that click registering. It's nimble enough at starting up, it's just interacting with the thing that seems to be slow.
Anybody else experience this, or am I just used to instant feedback from UI?
Sounds like side loading is a better choice on both OSes.
After moving abroad I was unable to access my Microsoft account (locked) for more than a month, just because I couldn't provide them with the last IP address I had used to access my account, the subjects of my last 5 received emails, 5 contacts from my contact list, and many other unreasonable details the common user doesn't even know how to answer. All this because I no longer owned my recovery email address.
I can't tell you how many times I saw an article about an app and tried to find it in the store, only to come up empty handed. This has happened so many times on my Windows Mobile 10 phone, I lost count. Just this week I was looking for a better Twitter client app. I would type in "twitter" "tweet" "twitter client" "twitter app" and all I would get would be the main Twitter app and nothing else. Then I had to start doing Google searches for "best twitter client for windows mobile 10" which then gave me articles from 2013 and 2014 and for WP 8.1 - not exactly an up to date list of current Twitter clients.
Compare that to the Google Play store. You simply enter "Twitter" and get a dozen other apps, none of which have Twitter in their name. Plume, Echofon, Fenix, Talon, and Periscope are just a few examples.
It was such a massive headache, I actually decided yesterday I'm scrapping Windows Mobile 10 and going back to Android. I've been a huge supporter since WP 7, and held out hope things would improve, and they haven't. Their app store is a massive failure, you can't find anything in the store you want, developers have no reason to build for the platform, and I'm not going to start on the myriad of UI problems I see already in the latest build (build 10586.63) that still have not been solved. Just basic shit like battery life is still a major problem. Not to mention they took away some very basic features from 8.1 that everybody loved, like the ability to show Bing weather on the lock screen.
I've finally reached my breaking point with their platform.
Windows is dangerous and 10 just made it more so. It's not easy, but break free of the proprietary os chains now before they lock your brain in with iBrain and update it without your permission.
To make your window resizable (!) and responsive, use the following XAML
<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <VisualState x:Name="wideState">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="641" />
            </VisualState.StateTriggers>
        </VisualState>
        <VisualState x:Name="narrowState">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="0" />
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="inputPanel.Orientation" Value="Vertical"/>
                <Setter Target="inputButton.Margin" Value="0,4,0,0"/>
            </VisualState.Setters>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>
There's also this gem:
bmp Windows::WinForms::XAML::Imaging::Bitmap = ^new Windows::WinForms::XAML::Imaging::Bitmap(filestream)
- You must pick up Clojure to understand and configure Riemann (we're not a Clojure shop, so this is a non-trivial requirement)
- Config file isn't a config file, it's an executed bit of Clojure code
- Riemann is not a replacement for an alerting mechanism, it's another signal for alerting mechanisms (though since it's Clojure and the configuration file is a Clojure script, you can absolutely hack it into becoming an alerting system)
- Riemann is not a replacement for a trend graphing mechanism.
- There are other solutions which can be piped together to get the 80% of the functionality we wanted from Riemann (Graphite + Skyline) in much less invested time
Skyline link: https://github.com/etsy/skyline
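For anyone who hasn't seen one: the config file really is evaluated Clojure, and the "getting started" shape looks roughly like this (threshold, service name, and addresses are made up; patterned after Riemann's upstream examples, not anyone's production config):

```clojure
; Assumed sketch: alert by email when any host reports a pegged CPU.
(def email (mailer {:from "riemann@example.com"}))

(let [index (index)]
  (streams
    (default :ttl 60
      index
      (where (and (service "cpu") (> metric 0.9))
        (email "ops@example.com")))))
```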
I like it so much that I did an experiment to implement it in C++
My implementation sucks, but I had a lot of fun working on it, and I got to learn more about how Riemann works.
There's a sample chapter available, which covers the initial Riemann implementation and a Clojure "getting started" guide which should help anyone - even if you're not interested in the rest of the book! :)
If you're coming from Nagios (or not), and you'd like something that will schedule Nagios event scripts (and others) and send them to Riemann, I have been using this in production since mid-2013: https://github.com/bmhatfield/riemann-sumd
It allows you to tap into the huge ecosystem that is Nagios monitors, without requiring any other Nagios component at all. It just translates the output into a Riemann event.
*at worst, it's a mangled subset of clojure with extraneous dots and the parentheses in the wrong places :-)
It's been a breeze, rather worry free and its very good collectd support has enabled us to cover very interesting use cases at Exoscale.
Riemann rocks, just not as monitoring system.
Bosun link: http://bosun.org/
Performant - in the realm of monitoring 6 OIDs across 50,000+ devices at 5-minute intervals
I like the look of the config structure, being Clojure.
Jokes aside: while the stream processing of events seems powerful, Graphite had something similar, though probably not as advanced or easy to use. However, the push approach brings its own limitations, especially on existing setups.
Another point: the court has no enforcement mechanism. I suspect that if all the major European powers disagree with one of its rulings, they will easily be able to flout it with impunity.
And they've done it several times before, but governments keep ignoring the rulings. I wonder if entire governments can be jailed for contempt of court...
1. Facebook made users email the regulator on a subject of "tangential relevance" - saying they support Free Basics, while the questions asked were on Differential Pricing
2. These emails were unsubscribed by TRAI, and 12 MM of those 14 MM emails weren't actually sent - probably because they went out to an empty mailing list.
3. The emails that were sent were obtained by misleading people into "supporting digital equality".
4. Facebook chose to represent and speak for all of the millions who had chosen to "support digital equality", which was questioned by the regulator.
5. Facebook didn't bother to inform the users who originally answered the "supporting digital equality" opinion poll of the questions TRAI actually asked, even after being asked to, and even after the consultation deadline was extended for exactly that.
6. Facebook chose to spend $44MM on this campaign (and an obviously unknown but really large sum on lobbying!).
I'm no policy expert or strategy consultant, but if there has ever been an epitome of "shooting oneself in the foot", this would be it.
Thankfully, there is a critical base of skepticism over here. Days like these make me really proud as an Indian Internet user.
PS: On another note, welcome the entry of Netflix. I already became a member and am enjoying it. I hope they remain very, very careful not to disturb the sacred net-neutrality waters.
I applaud them for catching onto this, because the Indian "license raj" has its downsides as well as upsides. This is the first time I'm seeing the regulatory powers actively fighting the "the first hit is free" tactic.
The ideal approach would be to meter it by usage, with Free Basics getting a fixed bandwidth fraction until the pricing kicks in.
Also, it helped that the non-FB movement explained things better - the AIB ads to save the internet were hilarious.
 - http://indianexpress.com/article/technology/tech-news-techno...
There is some really, really delicious irony in that complaint... perhaps if Facebook paid for higher priority mail delivery, their emails would deliver. :)
I've always found these "email your congressmen!" campaigns to be largely ineffective and even counterproductive. The FCC opened net neutrality for public comment and received a similar response, i.e. hundreds of thousands of form emails sent to them. The emails bury any signal under gigabytes of noise. The only result is that regulators will ignore all emails, not just the form letters. The form letters are basically a regulatory DDOS campaign.
Add to that the fact that the campaigns are orchestrated by large players in tech, e.g. Facebook, and they lose all credibility. A megaphone and an agenda should not be sufficient tools to subvert public discourse.
Here are earlier communications between TRAI and Facebook
Are there alternative methods/technologies/business models, other than differentiated tariff plans, available to achieve the objective of providing free internet access to the consumers? If yes, please suggest/describe these methods/technologies/business models. Also, describe the potential benefits and disadvantages associated with such methods/technologies/business models?
A rather nice invitation for people who actually did want to write in in support of Free Basics.
Some of the arguments their sales rep puts up when you go to them to place an ad:
1. We display ads even if the user does not have a need.
2. The standard "Everyone is on Facebook", which is not quite true in India.
I eventually stuck with google because I didn't feel my product which is targeted at farmers would be useful enough on facebook. This whole bullsh!t campaign is to get all indians on facebook so they can convince advertisers to sell ads exclusively on FB.
PS: freebasics may be an ad free platform but facebook on freebasics will definitely not be ad free.
But this should not distract from the fact the there needs to be a substantive debate about the merits/otherwise of their Free Basics initiative.
In a country like India where millions of people have ZERO access to the full internet, any effort that provides ANY access - however limited and curated - should not be shouted down by a vocal subgroup.
If after a meaningful debate the conclusion is that it is better to hope for eventual full access with zero current access(!) rather than instant limited access then so be it, but Facebook is not helping its cause with these stupid shenanigans.
Frankly, you look like you are bullies, and you've pissed off a lot of people by absolutely misleading them into signing your petition, to the point where you actually tricked many folks directly.
There are many, many people who distrust Facebook. I used to be on the fence, but now I see just what Facebook is like, I have to work out if I'm going to continue with an account. Frankly, it would be great if Facebook would wither and die on the vine (IMO, of course!) and a more ethical social media platform takes over. I can but live in hope.
The history of the Indian NN battle has been:
1) The old TRAI head releases an implicitly anti-neutrality set of consultation questions, very close to the end of the consultation period. (The underlying plan was to have the new rules passed without scrutiny; the new head of the regulator would assume office in a month or two, and all decisions would be blamed on the old head - SOP.)
2) Nikhil Pahwa among many other individuals, including people on Reddit india start being vocal about it, (including an MP who brings it up in parliament)
3) These individuals coalesce into a rough group and, using Twitter and in particular AIB's YouTube video (AIB is an Indian comedy group), get the message out. Millions of emails specifically answering the questions get sent to the regulator.
4) what was assumed to be a slam dunk for telcos, turns into an actual consultation process, especially with the arrival of the new TRAI head.
5) committee is formed and consultation paper answers/counter comments are being taken into consideration for policy
Now comes a new paper - months after the previous NN movement. The topic is on differential pricing.
This time Facebook learns from the NN movement and opens with a rebranded Internet.org: Free Basics.
Freebasics ostensibly is using the Facebook network to promote itself. Practically it's the same as Facebook using its network to promote a policy which it thinks is good for its users.
Free Basics follows a huge online campaign with a marketing blitzkrieg. (Not even kidding. There were more ads for Facebook than there were for popular movies at the time. Multiple hoardings and newspaper ads.)
In essence, Facebook learnt from the NN movement and tried to create the same basic groundswell of support for its plans. This included utterly unethical online surveys which essentially asked "do you agree with saving people: yes/maybe later".
In sharp contrast, while FB started strong, the Save The Internet coalition had to do a cold start. They were never meant to be a permanent NGO or movement, nor are any of the members activists or professional lobbyists.
So they didn't have things like opt in mailing lists to reach out for people. Nor had they anticipated the need to ask people if they could be contacted for future updates or requests.
Still, people once again coordinated, got the work done and got the message out - but a much smaller number than before (more arcane discussion topic than NN) and far less than Facebook managed to pump out.
The ability of Freebasics to leverage Facebook is hugely worrisome.
If it were not for a technicality - that some marketing honcho misunderstood the actual message that had to be sent - all of those messages sent to TRAI would be considered valid, and TRAI would have taken it into account.
A TRAI functionary said it directly day before yesterday - TRAI regrets that Facebook handled the issue the way it did because it was a great opportunity for people to let TRAI know what they really wanted. There's a sense of regret and disappointment at the regulator.
Facebook, learns. As will anyone who paid attention to this.
The next time Facebook or Reliance needs to fight a consultation paper, and it moves into the theater of public opinion, they will act correctly.
They will answer the correct questions. They will message more people. They will improve.
In contrast, the volunteers who decided to take this issue up, won't exist for other issues or have the necessary ability and man power to match the big players.
This isn't a win. It's a warning.
Note: details have been subsumed into larger points, so specific dates and sequences may be out of order (such as the conversion of Internet.org to Free Basics).
We detached this subthread from https://news.ycombinator.com/item?id=10932362 and marked it off-topic.
Now we find out the unexpected corporate benefit of not having showrooms or physical locations by which to service clients - if they can't walk into your place of business and make a scene, just consider them a happy customer!
>In my experience, its a hobby masquerading as a company, and it can probably run as a hobbyist organization for some time.
This is the gut feeling I get with every over-the-top announcement by Tesla. Frequently I get down-voted here for griping about the linguistic flourishes in Tesla announcements, but I have my reasons. Sure, creating a neat innovation or clever door opening apparatus is impressive and all, and great for show, but the boring part of pulling it off reliably XX,XXX times is a totally different animal.
Also, a corporation where everybody is so on edge that nobody will point out that the boss lifted a customer's car, too scared to engage either the CEO or the customer, is, for lack of a better concept, high-school-level drama lameness.
The fact that a customer-facing resource is airing information about internal process screwups directly to a customer is indicative that something is very wrong with service management at Tesla; this is a bush-league customer service mistake. Not only is the customer being informed that there is apparently a massive issue with the pipeline for delivering product to customers, but they're also indicating that there's clearly nobody enforcing ownership or accountability for reported issues.
Instead of feeling like you are gambling on a potential upgrade my perception is now I'd be gambling on a potential loss / failure-to-deliver if I bought a used car from Tesla. It's only one datapoint, but it's the only data point I have.
Tesla's system for doing anything other than one car to one person at one time is not good.
When we initially placed our 100-unit order, we got 100 confirmation emails timed suspiciously as though some poor person was entering the details one at a time. Their owner's website couldn't handle 100 unique vehicles tied to one user. When we took delivery, we had to go through a bunch of human processes twelve separate times; the people seemed incapable of batching tasks like signing title paperwork, so we repeated the routine for each car.
All of their systems are built to do one thing very well.
So this guy asked Tesla to do something it is not built to do and sell him a loaner car. And the systems broke. All of them. In Tesla systems (operational and technological) everything is built to do one thing.
Yes, it stinks that some people at Tesla acted dumbly in response to this. But overall, don't forget. People are components of a system.
There are no programs in the Tesla system to handle any of the variables this situation threw at it, starting with what the guy wanted Tesla to do.
When you deliberately ask a system to do something it isn't designed to do you shouldn't be surprised when it breaks.
How on earth does this person's experience point to the non-dealership model being broken? Has this person never dealt with a shitty dealership?
Aligning yourself with the customer, then failing to provide a solution is a rookie move.
None of us were on the line with this customer; it's very possible he was prying for details and the CSR was trying to be accommodating with information because they weren't empowered to deliver a good solution.
I wonder how deep the similarities go, or if this story is just a really odd edge case.
The only concerning part in the article is the explanation by Kevin that the reason he couldn't get through to anyone wasn't because no one was available, but rather that they saw him calling and refused to answer because they didn't want to deal with him.
Otoh, I'd far prefer a company screw over a rich guy who is the very definition of an equal party to contract/informed consumer. The "buy here pay here" used car dealers catering to poor, relatively uninformed, and powerless consumers do things far worse than this as routine business practice.
Source: Owner of a non-dual-charger who has literally never missed it. I charge at 30mi/hr off a 220v dryer connection in my garage, what's not to like?
Of course I have doubts the reporter was accurate but I do believe this guys complaint sounds legit.
Tesla - if you're reading this, this guy's experience has convinced me; I won't be considering buying a Tesla again until you can buy one and drive off with it that day.
You cannot just NOT ANSWER a customer when you don't know what to do. You take it to your superior, who takes it to theirs, who takes it to theirs. Simply not answering the phone and hoping this guy would just be ok with losing 4 thousand dollars is certifiably insane, and whoever decided that should be the course of action should be fired. That is NOT how you handle a customer.
At the very least he should've been offered either the car as is with a discount, or a similar model for the same price. I'm sure he would've been happy with either option, but the Tesla customer service staff utterly failed him.
The existence of a sales channel is utterly irrelevant. He contracted with the sales rep to buy THAT car at THAT price, that's what was agreed and Tesla did not deliver. THEY need to make it work, not him.
What Tesla did wasn't right, but it's very hard to have a pity party for Marty Puranik.
Once Tesla moves into a large-volume car, I don't see how they will keep up the image they portray. It's not that simple. What's worse here is that they had people who saw what was going on, and instead of running it up the flagpole fast, they tried to upsell the customer!!! Get real, guys.
Sorry your discount scheme didn't work out. Pay retail like the majority of the retail public.
And I smell a rat. Has anyone else who has purchased a Tesla had even half as many issues? I think this guy just worked things until he got some sales schmuck to make him an offer too good to be true, and then it was. (Note the several weeks to find a car. If Tesla sales always involve months of back and forth, they have serious problems.)
They have actual sales channels with real support; by sidestepping those, you're stuck in weird internal processes.
IMO, paying full price and just buying fewer things massively simplifies most processes.
PS: That's not to say Tesla did a good job. Just that edge cases are often fragile and it's a good idea to weigh your time vs. the actual savings.
1. He posted this on his company blog. Not a personal blog, not medium but on a company blog. I think he's hoping to get some business from the exposure. I'd be planning to leave a company quickly if my boss posts personal rants that are not business related on the company blog.
2. He outed the one rep that told him the truth. He gave the date and the name of the rep that told him Elon was driving the car. Why would you do that?? Perhaps the reason why the other reps kept mum was because they knew Marty was a difficult customer.
3. He goes on about having a new baby, about how his electrician was calling to install some power-ups in his garage. These things are not relevant to the story. Simply tell your electrician the car has not arrived yet.
He also mentions that "In 21 years as a founder/CEO of my own company, dealing with Tesla has been the most bizarre and strange experience I've had interacting with another organization."
That simply cannot be true.
Bottom line: Marty thinks the world revolves around him and is really upset Elon doesn't care about him.
I made a credit card payment last Thursday (1/14) in the mid morning, maybe 9:45 AM. Ideally this should happen pretty quickly, but I know in the US the infrastructure just doesn't match my current expectations. Seemingly there was zero movement until my credit account was credited the payment Friday afternoon, but the money was still in my checking account (no holds or pending transactions).
The weekend goes by, no movement. Monday (a bank holiday) comes and goes, as does Tuesday. I get a text message early this morning (~6:45 AM) that there was a large withdrawal from my checking account - the payment showed up! Nearly three full business days before the money was actually moved, and as of right now it's still a pending transaction.
The cynic in me wants to think that it's so people who don't pay attention to their finances are more likely to overdraw/double-spend the money, but part of me thinks it's just because the US infrastructure around banking and payments is so old it just isn't capable of anything approaching real-time transactions. I would love to see a logistical/technical explanation of why this type of thing takes so long in the US.
There's a lot of good, and bad, information here. It's impossible to tackle it all, so I'm going to point to the good things the US has going for it. (Worth getting other Fed Fast members and doing an AMA? Let me know.)
1) As many mentioned, Same-Day ACH is coming (slowly but surely). Although not real-time, it will be a helpful stopgap as more real-time systems come online at financial institutions (see #3). Combined with new payment API platforms, like Dwolla, many of us are enabling meaningful access and adding flexibility to an otherwise outdated platform. This will position platforms to take advantage of these new timeframes when they start arriving in late 2016 and 2017.
2) The major ACH operators, the Federal Reserve and the bank-owned The Clearing House, are both making significant investments in their tech stacks to enable real-time capabilities (the Fed just inked a $17M deal with IBM to update their software and capabilities, and TCH signed a deal with the UK Faster Payments provider, VocaLink).
3) The Faster Payments Task Force is an unprecedented market-led initiative which, despite all odds, is actually making meaningful progress on aligning the criteria and expectations for an interoperable real-time system. This is HUGE. Imagine cramming 300+ lifelong competitors, embattled legal adversaries, entrenched interests, and long-standing rivalries into one room to debate the future of a trillion-dollar landscape. Now imagine them agreeing to create a better system. And I'm not just talking about improvements in speed, but better security, flexibility, and capabilities that could enable the next wave of commerce. Keep your eyes on https://fedpaymentsimprovement.org/; big news is coming in the next few weeks.
There's some interesting stuff I couldn't get to here https://www2.swift.com/uhbonline/books/hub/httoc.htm
And some other interesting stuff covering these standards:
http://project.i20022.com/the-standard
http://www.c24.biz/c24-io-standards-financial-messaging-libr...
In the UK you can open a bank account for free, with no minimum balance and no stupid charges (i.e. ATM withdrawals, even at your own bank).
And no exception to the incompetence is the U.S. EMV decision to transition from swipe & sign to chip & sign, and only later (2018?) to chip & PIN. That really is just stupid. In the U.S. I never sign for my debit card, whether chip or swipe; I always use the PIN. In Europe, the same card asks for a PIN when swiped, but when I use the chip it wants a signature! WTF?
The delays of ACH seem less between friends at least if you use Venmo or Google Wallet since that's effectively private currency (within that system) and then only periodically use a "sweep" action if that balance gets higher than you want.
Most banks charge about $1 per Interac e-Transfer, but some accounts include a certain number of free transfers.
With all that being said, I just received an email from TD Canada Trust (one of the major banks) stating that they are now imposing a $5 fee to cancel an e-Transfer (previously free). I can't remember the last time I cancelled an e-Transfer, but I'm still most likely going to close all my accounts with them over it. I find it absolutely outrageous that they would charge you $5 to cancel a completely automated process. The only explanation I can think of is that they want to nickel-and-dime their customers.
The US Fed has been pushing for faster payment processing, but the big banks don't want it.
It's like the system acting in self-defense
Each of the trusted banks could be given access to the blockchain (rather than needing proof-of-work) and it would work as a distributed ledger without the need for VocaLink as a trusted intermediary.
Of course the question is how the costs of running VocaLink compare to those of running a blockchain type system.
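To make the comparison concrete: a permissioned ledger among known banks only needs hash-chaining plus member signatures, not proof-of-work. A toy sketch of the chaining part (signatures elided; all names and payloads invented for illustration):

```python
import hashlib
import json

def entry_hash(prev: str, payload: dict) -> str:
    """Hash an entry together with its predecessor's hash."""
    blob = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    """Append a new entry linked to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload,
                  "hash": entry_hash(prev, payload)})

def verify(chain: list) -> bool:
    """Any tampering with an earlier entry breaks every later link."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(e["prev"], e["payload"]):
            return False
        prev = e["hash"]
    return True
```

In a real deployment each bank would also sign the entries it submits; trust then comes from the known membership rather than from mining.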
Not a very compelling reason... storage is cheap!
This article describes the system for moving money within one country, but similar mechanisms exist for moving money internationally. Imagine the counterparty risk involved between banks governed by different central banks in different countries halfway around the world. Bitcoin removes the counterparty risk in transfers by making settlement instant per transaction, instead of the net daily settlement of traditional banking.
The reason this is important is that these transfers revolve around credit and when there is a banking crisis and everyone suspects everyone else's credit worthiness then payment systems can break down.
I don't think this is very sad. I don't use Paym. I think account number + sort code is easy enough to deal with and doesn't need to be changed.
I guess changing an account balance without a corresponding transaction would make automated checks (accounting verification) fail. But then we get back to the question of how any money got in the system in the first place.
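That's the double-entry idea: if every change is a transfer, the sum of all balances is invariant, and "new money" can only enter through an explicit issuing account. A toy sketch (account names invented):

```python
from collections import defaultdict

balances = defaultdict(int)

def transfer(src: str, dst: str, amount: int) -> None:
    """Every change is a paired debit/credit, so totals stay balanced."""
    balances[src] -= amount
    balances[dst] += amount

transfer("central_bank", "alice", 100)  # money "enters" only via the issuer
transfer("alice", "bob", 40)

assert sum(balances.values()) == 0       # the automated check always holds
assert -balances["central_bank"] == 100  # money in circulation
```

Editing a balance directly, without a matching transaction, is exactly what would make that sum-to-zero check fail.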
So what happens if you want to send more? Do you need to make a personal trip to the bank or is there some other system in place?
1. A Party may, in formulating or amending its laws and regulations, adopt measures necessary to protect public health and nutrition, and to promote the public interest in sectors of vital importance to their socio-economic and technological development, provided that such measures are consistent with the provisions of this Chapter.
In other words, the TPP overrides any domestic laws protecting public health and nutrition, or socio-economic development.
That's not at all how the TPP works. The treaty doesn't allow foreign governments to "override" local laws, but rather allows for damage claims against the governments themselves if they enact and enforce laws contrary to the agreements in the TPP itself.
I'd really like the TPP annotated by legal experts. Instead, it's annotated by the CTO of Fight For The Future. I'm not sure that's a win.
"... the TPP elevates investor rights over human rights and democracy, threatening an even broader array of public policy decisions than described above. This, unfortunately, is the all-too-predictable result of a secretive negotiating process in which hundreds of corporate advisors had privileged access to negotiating texts, while the public was barred from even reviewing what was being proposed in its name.
The TPP does not deserve your support. Had Fast Track not become law, Congress could work to remove the misguided and detrimental provisions of the TPP, strengthen weak ones, and add new provisions designed to ensure that our most vulnerable families and communities do not bear the brunt of the TPP's many risks. Now that Fast Track authority is in place for it, Congress is left with no means of adequately amending the agreement without rejecting it entirely. We respectfully ask that you do just that."
Cloudflare's captchas are nearly impossible to solve, which means that Tor users are effectively blocked from seeing the site. Would you consider using something other than Cloudflare to host the site?
Nobody has time for that. It's nice that they have pared this down to 31 different sections, but my guess is that they are not showing the full agreement here.
It would be much nicer if someone just dumped it all into a single PDF and HTML file.
Edit: Care to leave a comment rationalizing your downmods?
Let's stay with video games for a bit. What if we look at joy as "seeing the world change", graded by the degree of indirection from our inputs (the longer the cascade, the more joy)? Maybe let it have a preference for certain color tones and sounds, because that's also how games hint at whether what we do is good or not. Boredom is what sets us on a timer: too many repetitions of the same thing and the AI gets bored. Fear and disgust are things that come out of evolutionary processes, so it might be best to add a GA in there that couples success with some fear-like emotion. Anger, well, maybe wait with that ;-).
Edit: Oh, and for the love of god, please airgap the thing at all times...
Video games are also essential for AI pedagogy. Creating Pac-Man agents in Stanford's AI class is a great example. Most players can barely get a "strawberry", but to see a trained agent mimicking human expert-level play is eye-opening.
Quick reminder: Global Game Jam 2016 starts Jan. 29 and NYU is hosting its annual jam!
Video games are explicitly designed to test and fit within our bounds of conscious control and processing; particularly the retro games, but essentially all games in general have a very limited input control space (a couple keys or joysticks) and usually very rigorously defined action values. Moreover, these were designed by humans with very explicit successes, losses and easily distinguishable outcomes.
None of these descriptions fit the kind of control that an 'intelligent' system needs to handle. Biological systems have no predefined goal values, only very incomplete sensory information, and, most importantly, control spaces that are absolutely enormous compared to anything considered in a video game. At any point in time the human body has ~40 degrees of freedom it is actively controlling - compared to ~5 in a serious video game.
I do not doubt that pattern recognition and machine learning techniques can be improved through these kind of competitions. But the problem is in conflating better pattern recognition with general intelligence; implying or assuming any sort of cost, value or goal function in the controlling algorithm hides much of our ignorance about our 'intelligent' behavior.
NLP to understand dialogue and actions that need to be taken based on what NPC's/quests/item descriptions say, strategies for several different enemies with different strengths and weaknesses, exploring the open world in a logical order.
When you think about the difficulties of such a loosely defined problem, it's hard to buy into the real-world fears of AI.
Language is quite complex and can't easily be beaten by hard-coded algorithms or simple statistics. You can do some tasks with those things, but at others they will fail entirely. The closer you get to passing a true Turing test, the harder the problem becomes. It certainly requires human intelligence, and most of our intelligence is deeply rooted in language.
He mentioned games like Skyrim and Civilization as being end goals. But even a human that doesn't speak English wouldn't be able to play those games. Let alone an alien that knew nothing about our world, or even our universe.
"Made-Up Minds: A Constructivist Approach to Artificial Intelligence" by Gary Drescher presents a small-scale virtual world with a robot embedded in it that figures out the laws of its world by interacting with it, much like a child does. We need more people thinking like this.
On boot, all surrounding data would be taken in; this step gives everything context. All new data coming in would be processed (referenced against the original data to determine what is happening and what actions to take), then clustered, then merged into the original data set, dropping data from the original set determined to be irrelevant and updating the context to give a more relevant perspective on the new data coming in. (And loop.)
Aside from using them as benchmarks, the way games can simulate a world will probably be key in creating a true AGI. In the comment section of the article, we're already seeing some theories that involve video games not just as tests, but as a primary component of the intelligence architecture. Very exciting times!
On a related note, I think an official driving test simulation for all the self-driving algorithms, perhaps sponsored by the government, would be really beneficial.
At least motorcycle drivers who care are better drivers.
Maybe the article makes some valid scientific points, but I simply cannot get past this unscientific opening claim in a purportedly scientific article. It's not just me: no peer-reviewed journal will accept such frivolity. Passing on the article and hoping for better scientific writing in the future!
And here's why they aren't: First-person Shooters.
Why give AI something that's a goal that involves killing things that look like humans or animals for points? That's a recipe for disaster.
Breakout's not much better either. How often do you need to break a wall to smithereens with a ball? Never.
1) Work for one large client and essentially become an employee (consider this. a lot of startups pay good money for remote employees)
2) Work for multiple clients
Focusing on #2 here
Core rule: You want to be paid premium for quality and service.
Avoid marketplaces - it's very hard to compete on quality here.
Niche - the more focused you are on a (profitable) niche, the better you can charge a premium for domain competence
As thibaut_barrere mentioned, build a brand - I would even go further and create an agency-like brand. At the point I stopped saying "I" and started saying "we", I was able to charge more.
Don't charge by the hour but by the value - most developers charge for their time; you want to charge for the value you provide to the client. Read up on "willingness to pay"
Most important: Deliver as promised and always try to over-deliver in service, quality, etc. E.g., try to understand why the client asks for features and not only what features she/he asks for - you might be able to come up with better solutions or anticipate future requests. Any successful project should usually lead to improved reputation and more projects and clients.
I came into Syracuse knowing nobody and nothing.
I had never done any app making as of January 2015. I had done some wordpress stuff, but just the basics.
And I had (and have) no CS degree.
I now make a living on contract work. I did it by going to local meetups and introducing myself as a freelance web developer. Nevermind that I hadn't done freelance web development ever. I kept going to meetups for months and still attend a monthly hacker meetup. I participated in hackathons without really knowing how to program.
But all along the way I met people more experienced than I am and picked up two clients. I think one thing that I do differently from most is that I charge a high rate (I always quote $150/hr). I am willing to negotiate lower than that, but it's a starting point. I have been paid that in the past for less complicated work like hiring developers and being a project manager.
What am I saying? Your question is what sites to use? Just one: meetup.com
Remote OK - https://remoteok.io/
Stack Overflow - https://careers.stackoverflow.com/jobs?allowsremote=True
LiquidTalent - http://www.liquidtalent.com/
Working Not Working - http://workingnotworking.com
Hired - https://hired.com/contract-jobs
Gigster - https://gigster.com/
Mirror - http://mirrorplacement.com/
Metova - http://metova.com/
Mokriya - http://mokriya.com/
HappyFunCorp - http://happyfuncorp.com
Savvy Apps - http://savvyapps.com/
Clevertech - http://www.clevertech.biz/
Workstate - http://www.workstate.com/
AngelList - https://angel.co/jobs
I know you're just asking for sites and not approaches to finding contract work, but getting in with a very promising early stage company through contract-to-hire [that allows remote] is probably the most sustainable way to go.
Doing one contract project after another at an hourly rate just doesn't scale well financially and finding a next decent client can be like pulling teeth.
Sites /can/ work (I know people who make a good living off certain sites), but nothing will beat self-managed marketing in the long run.
Feel free to email me (see profile) if you have specific questions.
In short, to answer your question, I never used any sites to find contract work. I got all my leads through face-to-face interaction with real humans in the real world, and a good deal of it came from word-of-mouth because of exceeding my clients' expectations.
Contracting sites marginalize developers, and the type of clients who trawl them are typically the kind who will try to squeeze as much work out of developers for as little money as they can. On top of that, developers are generally a pretty introverted crowd, so the number of introverted and talented developers trawling those sites looking for work is far greater than the number of outgoing, personable developers in your local area. Which group do you want to compete against?
Instead, I browse job boards and when I find an interesting role I contact the company. If they are interested in my background and the fit is right, I sell them on setting up a contract relationship instead of full-time employee. Sometimes it works, other times it doesn't. The important part is being honest that you are looking to work as a contractor, not an employee.
Job boards to consider: AngelList, WeWorkRemotely, etc. If you're looking for a list of job boards, http://nodesk.co has lots, and so does this article by Teleport: http://teleport.org/2015/03/best-sites-for-remote-jobs/
These are informal "Can I take you out to coffee?" talks with people in your industry to see what they are working on, what is happening with them, what is going on in the industry. Every job I have ever gotten is through informal meetings with people I have met through my network (whether its your old job, your friends, parents, relatives, or other).
At the end of every one I ask: "Is there anyone else you think I should talk to?" and "Do you currently have any opportunities at your company for me?". Rinse, repeat. I guarantee that after investing in 30 informational interviews you will find work.
For Germany, Gulp (www.gulp.de) is a very good site where you can actually find clients that are willing to pay a reasonable hourly rate (they even have a rate calculator on their site).
If you're in the UK...
I've been contracting about 3 years now and started it the simple (and probably dumb) way - stick a resume up on jobsite.co.uk, wait for agents to call. Lots will. Be nice to them on the phone but be firm about what rates and locations you're willing to work. You'll get lots of useless ones who haven't even bothered to read it, but no matter, you'll learn to filter them out pretty quickly. Remember the good ones. Rinse, repeat.
I've had two contracts now through reputation, which is quite nice, but getting contracts from previous workmates isn't a panacea. One of them was the most boring thing I've ever done in my life (worse than shelf-stacking in a warehouse) and I quit after three weeks because I was literally unable to complete the work it was so dull. I told the client that I was poor value for money and a recent graduate would be a better choice. The other one was good though!
Also, make sure you're prepared for some time off between contracts, it's pretty much going to happen.
Some side projects I have done:
I have done more complex stuff, but it requires the user to log in.
I suspect the secret to contract work success lies in having really good networking skills and a Rolodex of contacts from having worked in a given industry and having a reputation as someone who delivers. If you don't have that then you would probably have better luck finding reasonable work by going to meetups or similar industry events to build a network of professional contacts. The only way I know of to do this online is to become a notable contributor to prominent open source projects and then use that to leverage paid work.
I've been consulting over a year (US-based, near NYC) and I've found plenty of very good clients (small and large) through freelancing websites.
Few loose guidelines I've used to help me with applying to gigs:
1) Evaluate if you think the person understands the value of the work, and only reply if you can somewhat-confidently answer "yes."
2) Reply to gigs that say "$5" or some other crazy low number, as long as they seem competent at explaining their project.
3) ALWAYS follow up with your past clients! Ask them for new work regularly.
If I'm looking for more cutting edge, interesting work I'll go out and find either a company, industry or project I'm interested in and try and insert myself into it somehow. Usually through meetups, over coffee or in one case just showing up (probably wouldn't recommend that, depends on the people - in my case it was 4.30PM on a Friday and I brought beer).
Usually I'll either do it gratis (if it's non-profit or public domain) or cut my rates if I'm learning on-the-job.
When I started pretty much all of my job offers and contracts came by word of mouth. I only had to kick down doors a few times before I had developed a reputation as a good worker. This involved cold-emailing, calling and meeting people at various industry events.
I didn't bid on low-quality jobs, and once I finished a job I offered them a maintenance contract outside Upwork.
Depending on your living situation and time available, I'd recommend trying to establish your own identity so you don't have to go through a marketplace for contract work. Instead, you'll have the contract work come to you, not filtered through a middleman who takes a cut out of your work. I would never recommend someone go through Fiverr, Upwork or these other marketplaces unless they were just moonlighting.
update: I post my pitch in the freelancer thread and potential clients contact me, for example https://news.ycombinator.com/item?id=9998249
Edit: I mean on HN similar to the first of month feature not a site (I know these are out there obviously).
The gist of it is, as many here are saying: Don't use marketplace sites. Instead show off your knowledge in a way that gets attention of potential customers, then they'll come to you.
I would recommend http://AngJobs.com
disclaimer: I run AngJobs, https://github.com/victorantos/AngJobs
I send a LinkedIn message to some of my contacts I'd like to work with, telling them it's been a while and that I'd like to get in touch, and offer to grab a cup of coffee with them this week.
During the meeting, tell them about your freelance status and that you're looking for work.
Here's my list of resources that I would be looking at if I needed to start looking for a contract immediately:
- Authentic Jobs: http://www.authenticjobs.com/
- StackOverflow Careers: http://careers.stackoverflow.com/jobs?type=contract&allowsre...
- We Work Remotely: https://weworkremotely.com/jobs/search?term=contract
- AngelList: https://angel.co/jobs
- Github Jobs: https://jobs.github.com/
- Hired: https://hired.com/contract-jobs
- Toptal: https://www.toptal.com/ (I'm a member of Toptal's network)
- Gigster: https://www.trygigster.com/ (haven't used it yet)
- Crew: https://crew.co/ (haven't used it yet)
- Approach companies at Meetups
- Meetups, meetups, meetups
- Pitch on forums
- Work with contract agencies
- Become a subcontractor
It also helps to work on branding yourself, blogging, and integrating into communities (like HN!). Generally, just becoming an authority on a topic and allowing people to get to know you before they work with you helps a lot. Kind of like patio11 has done for himself around here. Then people start coming to you instead of the other way around.
I would also highly recommend looking at DevChat TV's Freelance podcasts for ideas, they're really great: https://devchat.tv/freelancers
"By the same token, the ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world. It is impressive and surprising that these general-purpose, statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily pave the way towards true machine intelligence."
So true, and this is why I don't listen when Elon Musk or Stephen Hawking spreads fear about the impending AI disaster; they think that because a neural network can recognize an image like a human can, it's not a huge leap to say it will soon be able to think and act like a human, but in reality this is just not the case.
"This is all well justified, and I have no intention to belittle the current and future impact of deep learning; however, the optimism about just what these models can achieve in terms of intelligence has been worryingly reminiscent of the 1960s."
From what I've read and seen, the leading people in the field (Yann LeCun, Hinton, etc.) seem to be very aware that the current methods are particularly good for problems dealing with perception but not necessarily reasoning. Likewise, I have not seen many popular news sources such as NYT make any crazy claims about the potential of the technology. I hope, at least, that the people who work in AI are too aware of the hype cycles of the past to get caught up in one again, and so there will not be a repeat of the 60's.
"Deep learning has produced amazing discriminative models, generative models and feature extractors, but common to all of these is the use of a very large training dataset. Its place in the world is as a powerful tool for general-purpose pattern recognition... Very possibly it is the best tool for working in this paradigm. This is a very good fit for one particular class of problems that the brain solves: finding good representations to describe the constant and enormous flood of sensory data it receives."
It's not hard to see why NNs are becoming the prime candidate for AGI: their architecture is inspired by biological neurons. We are the only known AGI, so something similar to the brain is a plausible route to producing an AGI. NNs at least mimic the massively parallel property of biological neurons. And if we're optimistic, the fact that NNs mimic how vision works in our brain might mean that we are at some point on the continuum of the evolution of brains, and it's a matter of time until we discover the other ways brains evolved intelligence.
What keeps me optimistic is evolution. At some point brains were stupid, and then they definitely evolved AGI. The question is how did this happen and whether or not there is a shortcut, like inventing the wheel for transportation instead of arms and legs.
I feel like the gist of what current neural nets can do is "pattern recognition". If that's fair, I also suspect that most people underestimate how many problems can be solved by them (e.g. planning and experiment design can be posed as pattern recognition - the difficulty is obtaining enough training data).
It's true that we're most likely a very long way away from general AI - but I'm willing to bet most of us will still be surprised within the next 2 years by just how well some deep-learning based solutions work.
Here's the important difference about NNs: they are incredibly general. The same algorithms that can do object recognition can also do language tasks, learn to play chess or Go, control a robot, etc., with only slight modifications to the architecture and otherwise no domain information.
That's a hugely different thing than brute force game playing programs. Not only could they not learn the rules of the game from no knowledge, they couldn't even play games with large search spaces like Go. They couldn't do anything other than play games with well defined rules. They are not general at all.
Current neural networks have limits. But there is no reason to believe that those limits can't be broken as more progress is made.
For example, the author references that neural networks overfit. They can't make predictions when they have little data. They need huge amounts of data to do well.
But this is a problem that has already been solved to some extent. There has been a great deal of work on Bayesian neural networks that avoid overfitting entirely, including some recent papers on new methods to do them efficiently. There's also the invention of dropout, which is believed to approximate Bayesian methods and is very good at avoiding overfitting.
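The dropout idea mentioned above fits in a few lines of code. This is a generic "inverted dropout" sketch (not taken from any specific paper's implementation): during training, each activation is zeroed with probability p and the survivors are scaled by 1/(1-p) so the expected activation is unchanged; at test time the layer is the identity.

```python
import random

def dropout(activations, p, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p, scale the
    survivors by 1/(1-p) so the expected output matches the input;
    a no-op at test time (or when p == 0)."""
    if not training or p == 0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

# During training with p=0.5, each unit is either dropped (0.0) or doubled (2.0)
out = dropout([1.0] * 100, 0.5)
```

Randomly thinning the network like this every step effectively trains an ensemble of sub-networks, which is one intuition for why it regularizes so well.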
There are some tasks that neural networks can't do, like episodic memory and reasoning, and there has been recent work exploring these tasks. We are starting to see neural networks with external memory systems attached to them, or ways of learning to store memories. Neuroscientists have claimed to have made accurate models of the hippocampus, and DeepMind said that was their next step.
Reasoning is more complicated and no one knows exactly what is meant by it. But we are starting to see RNNs that can learn to do more complicated "thinking" tasks, like attention models, and neural turing machines, and RNNs that are taught to model programming languages and code.
We shouldn't forget that the mind/body split is a wholly artificial construct that has no basis in reality. The brain is not contained in the head. The nerves running down your spine and out to your toes and all over your body are neurons. Exactly the same neurons, and directly connected to the neurons, that make up what we think of as the separate organ 'the brain'. They're stretched out very long, from head to toe, sure, but they are single cells, with the exact same behavior and DNA, and there is no reason to presume that they must have some especially insignificant role in our overall intelligence.
Then there is the fact that it is probably reasonable to presume that a machine which has human-level intelligence will not appear overnight. It would almost necessarily go through long periods of development. During that development, when the machine begins to behave in ways the designers are not able to understand, what will be their reaction? Will they suppose that maybe the machine had intentions they were unaware of, and that it is acting of its own volition? Or will they think the system must be flawed, and seek to eliminate the behavior they didn't expect or understand?
I have a hard time imagining that an AI system will be trained on image classification and one day suddenly say "I am alive" to its authors or users. If it instead performs poorly on the image classification because it is pondering the beauty of a flower in one of the images, what are the chances that nascent quasi-consciousness would be protected and developed? I think none. We only have vague ideas about intelligence and consciousness and our ideas about partial intelligence are utterly theoretical. Has there ever been a person who was 1% intelligent? Is mastering checkers, or learning NLP to exclusion of even proprioception 1% of human intelligence? You optimize for what you measure... and we don't know how to measure the things we're looking for.
On the other hand there are reasons to be optimistic. Human brains are built from networks of neurons and the artificial neural networks are starting to have quite similar characteristics to components of the brain - things like image recognition (https://news.ycombinator.com/item?id=9584325) and Deep Mind playing Atari (http://www.wired.co.uk/news/archive/2015-02/25/google-deepmi...)
The next step may be to wire these things together in a structure similar to the human brain, which is kind of what DeepMind is working on - they are trying to do the hippocampus at the moment. (https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...)
Also we are approaching the point where reasonably priced hardware can match the brain, roughly the 2020s (http://www.transhumanist.com/volume1/moravec.htm)
It'll be interesting to see how it goes.
Many people got disillusioned with classical AI because mathematical logic (inference engines) would not scale to 'strong' AI.
Hofstadter says that most concepts handled by humans do not fit one-to-one into clear-cut ontologies. Instead, higher-order concepts are created by finding analogies between objects or simpler concepts, and by grouping these similar concepts into more complex entities.
I have a summary of the book here http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/po...
Whenever I've tried to extract data like that inside spiders, I would invariably (and 50,000 URLs later) come to the realization that my .parse() code did not cover some weird edge case on the scraped resource and that all the data extracted was now basically untrustworthy and worthless. How to re-run all that with more robust logic? Restart from URL #1.
The only solution I've found is to completely de-couple scraping from parsing. parse() captures the url, the response body, request and response headers and then runs with the loot.
Once you've secured it though, these libraries look great.
PS: If you haven't used ScrapingHub you definitely should give it a try, they let you use their awesome & finely-tuned infrastructure completely for free. One of my first spiders ran for 180,000 pages and 50,000 items extracted for $0.
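The decoupling described above can be sketched like this (a toy stand-in, with made-up names, for what a Scrapy spider callback would hand off): stage 1 only archives raw responses, stage 2 extracts, so a parser fix means re-running over the archive instead of re-crawling.

```python
def archive_response(store, url, body, headers):
    """Stage 1: capture everything, parse nothing.
    In a real spider this is all the .parse() callback would do."""
    store.append({"url": url, "body": body, "headers": headers})

def parse_title(record):
    """Stage 2: extraction logic, deliberately naive. When it breaks on an
    edge case, fix it and re-run over the archive - not over the web."""
    body = record["body"]
    start = body.find("<title>")
    if start == -1:
        return None  # the edge case surfaces as None instead of poisoning the data
    end = body.find("</title>", start)
    return body[start + len("<title>"):end].strip()

store = []
archive_response(store, "http://example.com/a",
                 "<html><title>Page A</title></html>",
                 {"Content-Type": "text/html"})
archive_response(store, "http://example.com/b",
                 "<html>no title here</html>", {})

# The offline pass - cheap to repeat as the parsing logic improves
titles = {r["url"]: parse_title(r) for r in store}
```

The trade-off is storage: keeping full response bodies for 50,000 URLs costs disk, but disk is far cheaper than a re-crawl.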
Very glad to learn about this site Scraping Hub. Keep the war stories coming. It's technologies like these that brighten up our otherwise drab tech careers and help some of us make it through the day.
Am I the only one who dislikes Scrapy? I think it's basically the iOS of scraping tools: it's incredibly easy to set up and use, and then as soon as you need to do something even minutely non-standard it reveals itself to be frustratingly inflexible.
 - http://slides.com/escherize/simple-structural-scraping-with-...
Disclaimer: I wrote it. No crawler yet, though that's next after a new website.
Now, the cost of running the postal service is the cost of supporting the whole network of delivery vans, sorting offices, collections and so on, including all the staff.
The cost of supporting the network is the same whether it carries a very large number of packets or zero packets (up to the point where you have to add infrastructure to cope with extra traffic. Yes, like the Internet).
This economic structure means you can carry traffic at a marginal cost if you know the cost of supporting the whole network is covered.
All of which was worked out before Sir Rowland Hill launched the Uniform Penny Post in the UK in 1840. This disrupted the whole messenger business (where you paid for distance traveled) and was widely copied everywhere else. After that, nations formed a Universal Postal Union on the basis that "we'll deliver your letters if you deliver ours" (like the Internet).
In the early days of the public Internet (early 1980s), I used to explain how it worked by comparing it to the penny post. It's nice to be able to do the reverse ;-)
 http://fortune.com/2015/03/11/united-nations-subsidy-chinese... http://cep.lse.ac.uk/pubs/download/cp396.pdf
I asked around, and it turns out that postal services in rich (EU) countries have a special, heavily subsidized rate for third-world nations. This wasn't a problem with the occasional letter from Africa, but the postal services didn't see this coming: mass free shipping from China. Apparently the EU wanted to get out of this, but China refuses (and seems able to, for now).
Enjoy it while it lasts :)
For reference, 6 RMB = 0.91 USD.
Larger parties are able to obtain even lower rates.
*I say originally, as my experience with Amazon has been on a downward trend for at least the last two years.
In other words, for the business selling you the item, shipping is free. It's the Chinese citizen/taxpayer that foots the bill. This shouldn't be surprising since China, eg, is notorious for devaluing their own currency as a means of boosting exports. This is effectively a tax on the greater populace for the benefit of their manufacturing sector.
Amazon does not like this at all and are currently lobbying the US Govt to stop it.
Some portion of the people who buy one 3-cent button are going to come back in a couple weeks and order forty thousand 3-cent buttons. Free, fast shipping on the first button is to entice you to use that button for your design instead of someone else's.
If the item is small enough, it usually arrives in a padded envelope in my mailbox. Bigger items I may have to pick up at the post office. Most often, it arrives within 2 weeks.
I suspect that postal services in different countries have some kind of peering agreement. That they simply assume that the amount of mail would be pretty much the same in both directions and because of that, they don't really charge each other.
If something costs $99 it isn't free. Free is no cost; $99 is not no cost, therefore $99 is not free. Just as text messaging isn't a "free" part of your $30/month cellphone plan - it is, however, an included part.
It has to be this. I work at an E-commerce company. We often do the math on just exactly which items make a profit for us vs which don't. It's not always obvious until you really dig into the exact costs of handling the items. I believe Alibaba has a lot of 3rd party resellers, it's possible they haven't done the math themselves.
This means subsidizing purchases to draw users in.
I just considered buying that 3c button just because of how cheap it was, and this would have required me to register with them, and enter my cc number. This means that purchasing stuff from them in the future would be easier and I would be more likely to do it.
This could explain the free shipping on the China leg, and then as was mentioned there is a treaty that allows for nearly free shipping in the US by Chinese companies.
That's amazingly fast - I don't understand why it's faster to send things from Hong Kong/China to the US than from the same to Australia. I've had things from AliExpress take upwards of 35 days to get to me (in a "metro" area of SE Australia)
No idea how it is possible. The bubble wrapping protecting it from shipping is worth more than what I paid.
China must simply have cheaper shipping rates, obviously not 3 cents, but probably a fraction of what you are used to spending.
The plane was already flying from China to wherever, filled with AliExpress merchandise. They may have paid for the whole plane instead of just by weight. So adding this product didn't add any tangible extra weight or cost.
Then the mail carrier or whoever is already going on that route. I doubt he gets paid per package, so adding another small package is no big deal and doesn't add extra time/cost.
Unless someone is ordering 1million buttons, then the weight starts to add up. But then so does the cost of the buttons, which at that point would cover the shipping.
I'm generalizing a bit since I don't know the specifics of the shipping industry. But this makes sense to me.
A plugin for vlc that can show you the name of any actor when you ask would be really fun!
I ran it at our free, ticketless convention called Makevention (Bloomington, IN). Estimates were that 650-700 people showed up. My tracker counted 669 uniques, which I think is spot on.
I also wrote mine with privacy in mind. The database was a KNN over a perceptual hash of the face. All that was stored was a hash, and it could only verify a face: it could not generate the face from the hash. Considering the application (a maker/hacker con), I wanted to be sure that this was the case. (The data only resided on that machine, and it's wiped now.)
I've halted work on the GUI version of it. Now I want to make it client/server, where the clients are RasPis (or other cheap compute with a camera) and the server is whatever good machine you have. Initially I'll reimplement the same algorithm, but I know that a brute-force KNN's time/CPU cost keeps growing as I get more samples.
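The hash-and-match approach described above can be sketched in pure Python. This is a generic average-hash illustration with invented data, not the tracker's actual code: store only a perceptual hash per face, then verify a new sighting by Hamming distance to the nearest stored hash. The hash is one-way; you cannot reconstruct the face from it.

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a flattened grayscale crop:
    bit i is 1 iff pixel i is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def nearest(db, h):
    """Brute-force 1-NN over the stored hashes (linear scan per query,
    which is why query cost grows with the database)."""
    return min(db, key=lambda stored: hamming(stored, h))

# Two fake 4x4 grayscale "face crops" with opposite light/dark patterns
face_a = [10, 200, 30, 220, 15, 210, 25, 190, 12, 205, 28, 215, 18, 198, 22, 208]
face_b = [200, 10, 220, 30, 210, 15, 190, 25, 205, 12, 215, 28, 198, 18, 208, 22]

db = [average_hash(face_a), average_hash(face_b)]

# A slightly brighter re-sighting of face_a still hashes near face_a's entry
noisy_a = [p + 3 for p in face_a]
match = nearest(db, average_hash(noisy_a))
```

Average hashing is robust to uniform brightness shifts (the mean shifts with the pixels), which is exactly what makes it usable for re-identifying the same face under slightly different lighting.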
There was a nice book, "Database Nation", that described a case of scanning licence plates of cars crossing a bridge to see who's at home and who left for work. Made burglaries a lot easier.
And now we do that based on faces ... nice .. /s
Oh okay. Surely this will stop any bad actors.
I think what is unfolding is a shift away from consumer businesses and back to boring business to business services. The latter tend to have more predictable cash flow (which ironically is a handicap when VC money is flowing to "viral" consumer businesses that can paper up their growth).
B2B services whose customers are not linked to the local tech economy should do just fine. It might be harder to raise money for a while, but that's good too, it'll weed out the weaker companies. OTOH, consumer companies that already have ten years of rapid growth priced into their valuation are pretty screwed. Hard to see where they avoid some really painful and damaging adjustments.
And a word of advice for workers. If you are at all unhappy with your job, or suspect that your company is vulnerable to a downturn, now is a good time to start looking around for something you'll be ok with for a couple years. Once companies start laying people off in larger numbers, it will get ugly (maybe not 2000 ugly, but it won't be fun).
"Q4 2015 pretty much the same as Q4 2014, the sky is falling!"
> "But when you have a lot of money chasing all these great ideas, and you combine it with the fact that entrepreneurship has gotten sexy in the last few years and become the in thing for a certain crowd, what you end up with is a huge number of people starting companies who have no business at all doing that."
I've said this to tons of people privately and publicly: entrepreneurship has become the new "I'm working on a novel". Lots of people have good ideas in their heads. That doesn't mean they should be building businesses on them.
Some people are just not cut out to be entrepreneurs. And once the money dries up, these will be the first ones to go.
The rest will continue building businesses because they can't really think of doing anything else.
That the technical companies in the Bay Area seem to be isolated from the hardships of the rest of the economy doesn't make it a bubble; it's simply a place with a different economic focus and different economic realities.
I swear people around here actually sometimes seem to wish bad things would happen, as some kind of schadenfreude.
We see these high valuation numbers being tossed around as some kind of insult to good sense, but we're talking about just a few companies, and in the grand scheme of the economy of San Francisco, Palo Alto, Mountain View, and San Jose these numbers are a tiny fraction of the overall system.
The reality is that software is still eating the world, and more change and value is still to come from it: automated vehicles, robotics, smart appliances, VR, and a myriad of far less sexy technological innovations with sound economic value creation.
Now for $10 a month, Linode provides me with a VPS box, 2 TB of transfer, and a 125 Mbps port, and the plan is billed not yearly or monthly or even daily, but hourly.
There are two app stores I can put apps out with, with a variety of monetization schemes. I know for Android that over 1 billion people use their Android device at least once a month. There is also the web.
Companies have gone from renting offices to subleasing to co-working to virtual offices.
If you can program (or can install software with minimal programming skill, have some creative ideas, and will hustle), there has never been a time when fundraising has been less important, because you do not need money to get your product to market. That being the case, worrying about getting a job and layoffs makes less sense. Nowadays you don't need an office, a shrink-wrapped software deal with computer stores, or a handful of leased colo'd servers and an IT team to support them. You can start with much less, get to where you're making a minimal living, and then go from there. After that you can take a job, or take seed/VC money if you want, but you don't have to. I have seen this come to pass, and I have heard many luminaries in Silicon Valley say the same thing. Of course, in good times it is less work and easier to get a job making low six figures programming than it is to make $30k a year on your own bootstrapped business, never mind growing that to $100k. It is a lot of work, as many have said. It is doable like never before, though.
(Yes I realize this is overly simplified and bleak, that's the point.)
I think this won't be an acute bubble for incumbents, but it will be for early-stage startups that badly need funds to extend a short runway.
Sure, it's always "different this time" for a CEO who needs to keep investors' hopes up. Sales crash with the deflation that is coming, and overvaluations are followed by undervaluations when interest rates increase.
On one hand, I feel sorry for all those software devs (myself included) who might be out of a job in the next few months or whose salaries will start shrinking.
On the other hand, it will be nice to watch some over-funded startups crumble to pieces and make room for more deserving (bootstrapped) newcomers.
With the Fed raising rates, VC's already starting to hedge their bets, and friends of mine leaving startups to go back to stable corporate gigs, I see a shuffling of the deck, but nothing too major.
I feel like this is the normal eight-year tech cycle coming back around again:
- 1992 recession and crash
- 2000 recession and crash
- 2008 recession and crash
Overdue, if true. If not, I guess we'll keep waiting for the next hangover to catch up to us.
The tech startups that involve actual physical products with manufacturing, distribution, shipping, retail placement, etc are another story.
It seems like we would be well served to have terms that differentiate the two, for the sake of headlines like this.
I wonder if this means companies have gotten wise to the bad deals they were signing just to get a high valuation, and are now accepting lower valuations rather than giving up preferred stock with such powerful terms. I know a few smaller companies that have done this and prefer not to make headlines with sky-high valuations that everyone knows are meaningless.
This is a good thing.
The article is correct that it is not as silly as the late 1990s. Many startups now actually have working products, revenues, and sometimes profits; many 1990s companies lacked all three.