Young people have a greater share of their experience on the most modern platforms and are unlikely to "write FORTRAN in any language" (JS, Ruby, etc.).
We've also discussed how homogeneity is valuable to an early startup. Having everyone be culturally similar may allow faster pivoting and interpersonal comprehension.
Finally, tech startups today are largely focused on the "exit": it's not about building and maintaining a product in the long term. Older employees have the planning, analysis, and maintenance experience to establish a product vision for the next decade. But the startup founder (by and large) doesn't want to think of a product over a decade; it's all about MVP, pivots, and fast exits.
Not that any of that is bad; it just doesn't fit well with the average older employee.
I'll admit when hiring I have a little bit of the opposite bias. Very young employees can sometimes be too aggressive about "what is this legacy garbage? we should rewrite it all in RoR and JS". Hey man, we're still maintaining COBOL apps here, slow your roll. It's all about long term planning and maintenance for us.
I've definitely experienced ageism but I suspect that it is rarely institutionalized. Instead I think that people like me are relatively rare. I have no real interest in management or another career--I like being a developer. That means when I show up for a job interview, I don't exactly fit in since it is likely that few of the candidates are my age or older. Sometimes I can overcome this and other times I'm not really given a chance.
I mostly work as a contractor so I have frequent job changes and I haven't found it to be that much of a problem. It is just another hurdle that has to be overcome. The group I'm working in now, I'm the oldest developer by at least 10 years.
>Eight of the companies, the study said, had median employee age of 30 or younger. In comparison, the Times reported, the median age of the American worker was 42.3 years old.
Ageism is not the only explanation for this discrepancy. The software industry has exploded in growth in the last twenty years. Most other industries have not seen the same rate of growth. When a labor pool for an industry is strained, it sends a signal to young people to pursue a career in that field. As a result, the labor pool is filled by proportionally more young people.
Eventually the growth of an industry will cease. Then fewer young people will pursue it as a career path, and the median age of laborers in that field will rise.
The problem is when a candidate with better qualities gets passed over for a younger candidate because of prejudices rooted in that generalization.
Say you want a skilled programmer who will work for a certain wage and put in some amount of overtime. You pitch it to a fresh college grad and to a 35-year-old who just got downsized out of a job. When the interviews conclude, it's obvious that both are willing to take the wage you're offering, and the 35-year-old is far more knowledgeable.
If you take the kid because you think the older guy might not be as willing as he claims to put in overtime, or because you think he might be too set in his ways, or because he might be too old to match the cultural fit... That's a problem.
People can hire young people because they're cheap. Especially startups, who maybe can't afford to pay for experience. Likewise, someone with a mortgage and kids might be less willing to look for a job with a risky business. So the average age might drop in those kinds of businesses and that's OK.
The problem is when you see the effect and invent the cause. "More young people are in successful startups, that means avoid old people if you want to be successful".
Instead it could be just "frugal startups are more likely to succeed, so don't spend too much on your labour" in which case given two candidates willing to take the same wage, the one with the better skills should win, regardless of age.
> The average age of a successful entrepreneur in high-growth industries such as computers, health care, and aerospace is 40.
> Twice as many successful entrepreneurs are over 50 as under 25.
> The highest rate of entrepreneurship in America has shifted to the 55-64 age group, with people over 55 almost twice as likely to found successful companies as those between 20 and 34; in fact, the 20-34 age bracket has the lowest rate of entrepreneurial activity.
If ten times as many people over 50 than people under 25 try to start businesses, then the statistics strongly favor the under-25's. These statistics don't mean anything unless you know the number of people in each age group who start businesses.
> 75% have more than six years of industry experience and 50% have more than 10 years when they create their startup.
What does this even mean? Years worked is not the same as having useful experience.
The article would be much stronger if it showed evidence that older workers were more productive, more likely to succeed at starting a business, or were in some other way undervalued by the current job market.
Furthermore, start-ups also implicitly prefer "just good enough" in order to get the company up and flying quickly enough to accrue investor interest, whereas stability is the bottom line for older engineers, and stable systems require far more time than some investors are comfortable with.
Even worse, there is the management aspect. Older people have very different management styles, and throwing caution to the wind is definitely not in their managerial toolset. If your company has older people in positions of power, it definitely scares vast swaths of people away, from eager investors looking to turn a quick buck to energetic young engineers looking for the autonomy to work without anyone second-guessing their decisions.
Finally, the saying "old is gold" applies quite well; however, that gold needs regular polish to remain shining.
Starting off with some ageism and general cluelessness of your own doesn't help. If they wanted a "young engineer" to generically optimize data storage, I would be at least half-competent, and could give them the names of several engineers far younger than 77 who would be excellent at such work in general.
What neither I nor virtually any other engineer, young or old, would know, and what the engineer who helped build the thing would, is anything about the specific equipment and code in use on Voyager. This is specialized knowledge, just like a lot of the knowledge young 2013 NASA engineers have.
It is equally as beneficial to employ those engineers to modify the systems they built as it was to employ Lawrence Zottarelli to modify the system he built. Age is unrelated.
An unwillingness to train does exist in the more seasoned veteran demographic, but it's just as bad when you get a young developer who is evangelical about their particular niche.
I will say this though - I have to constantly step my game up to stay relevant. At 50 I will be expected to not only know programming very well, but the industry as a whole including analysis, planning, finances, management, etc.
1) Move to management to continue advancing
2) Find one of the very few large to midsize companies that has a long-term developer career track comparable to their executive track
3) Start your own thing and exploit the fact that you are more productive than younger developers, and control the purse strings
Large and even mid-size companies, even ostensibly tech companies, still, for the most part, view developers as an R&D cost center. They look at cheap development resources, either young kids or offshore workers, and think "For the price of 1 developer, I could have 10!"
Now they have 2^10 problems.
1. They have a family with responsibilities: sick kids, dance recitals, etc.
This leads to less professional growth outside of work, more personal days, more sick days, and more personal phone calls during work.
Once comfortable, they rarely leave the job unless forced out.
2. They attempt to solve every problem with the same set of solutions. Few attempt to find new technologies or try different things.
The implementation may not be the best one available that will set up the business for future success.
Solutions are generally predictable.
3. Lack of motivation to prove themselves.
Project deadlines are just met. They do what's asked of them and nothing more, and don't try to stretch themselves by expanding the problem domain.
I could be totally off, but from what I know and have seen, these seem to be the trends.
Anyone who is on either side of that equation is fooling themselves on the reality of engineering and team dynamics.
Of all the big young tech companies, I would think that Google would be the one to benefit most from older, experienced employees (besides the group of all-stars it already has, such as Norvig)...Google's business encroaches on a lot of other domains, and domain knowledge is something that (usually) gets better with age. Perhaps in 5 to 10 years, there will be a bigger group of 40+ yr old professionals with enough tech experience/savvy to be more obvious assets to tech startups.
Maybe ... or their incentives aren't aligned with yours? We used several different Indian out-sourcing shops at various points and if there was the slightest ambiguity in a specification (and how do you write one without any ambiguities?), they would (purposely?) do what you'd least expect. As a contract programming shop, this led to the most billable hours.
We later purchased a company that included an in-house division in Bangalore. These programmers were therefore employees of the same company that I was, and therefore motivated by the same factors (successful projects meant more work ... unsuccessful projects meant looking for another job). In general, the in-house Indian programmers were competent, aside from slightly inflated grades (a lead JavaEE programmer with 2 years of experience?).
So from my experience, there are some brilliant Indian programmers, and some worthless ones, with the vast majority falling in the middle ... just like here.
P.S. I'm in the US, but I imagine that "here" is valid for many values.
First, as the article says, people lack motivation.
In India, the decision to choose IT/CS does not come from the children themselves; it is often influenced by family or friends. Tell anyone that this field pays well and they will be happy to join it without second thoughts. This happens with the majority of people.
Second is the education system. The article clearly states:
I don't blame the quality of education here. That's a common excuse. If a person is motivated, he'll surpass that constraint.
"Why are you building a 2 page project? Others are making big projects, expand it and make something big!"
I tried to explain that it would take time to nail down the UI and design, and that it was good enough for a single-person project. But the teacher just didn't understand. And in the end, I ended up making a "Learning Management System" using WordPress.
And in the final viva, the questions asked were these:
"What is SSL?" "What does the 'collapse' button do?" (This is a standard WordPress button; it just hides the menu.)
Since the number of students was large, no time was given to explain or present it! This system of having such people as teachers kills any motivation one has. I can't speak for others, but for me, spending 6 hours in an environment like this and then staying motivated about programming is very, very hard!
Add to this that you need to be an expert in Physics, Chemistry, and Maths to get into the top institutes (IIT, NIT), even if you want the CSE/IT course. This filters out the majority of people with an interest in programming. I have been programming since 9th standard, but I wasn't good at Chemistry and Physics, so I couldn't go to a good institute.
Quality of education is a big factor. The education system is churning out engineers who are experts at cramming and lack any interest in programming.
> And you expect the quality of $200 per hour experienced developer.
> Stop having crazy expectations.
And writing good-quality code will not get you a good rating, and will not get you promotions. These firms make a profit when they have more headcount. A good, experienced coder becomes a liability because he/she has to be paid more, and therefore the firm makes less money off him/her. To make a profit, these firms need more fresh-out-of-college kids, who can be paid peanuts.
There are two sides to this: if you don't get paid a lot, you are probably demotivated, and yes, you can choose to deliver below-average quality. However, that way it is very unlikely that you will ever get paid more or get better projects. Hence: you're stuck where you are.
If you however sacrifice yourself a little bit, deliver great code, put in the extra effort, you probably negotiate your way up.
At least, that's how I got from being an inexperienced web developer to owning a business and having a great staff <3.
And, y'know, even the best organizations make mistakes sometimes. If a company tried outsourcing to India once, for good reasons, discovered the results were bad and has sworn off ever doing it again, it could be there's nothing wrong with the company - it just made a mistake. Less than that in fact - it did a worthwhile experiment and got a negative result.
It seems the main issue is that decision-makers in the West are happy to find cheap outsourcing companies in India. Their pick goes to the company with the lowest hourly rate. What they don't realize is that the good programmers move very fast into better-paid positions elsewhere... leaving you to work with the less qualified people.
Welcome to the cost "efficient" world of outsourcing!
However, many of the times I have encountered this there was a desire to do more outsourcing where the quality of prior work didn't merit the expansion. More often than not, the one insisting on the outsourcing had some sort of personal relationship with the group the outsourced work was going to.
This happened with a couple of different regions/nationalities.
Why relevant to this article? It's not just about pay, but specifically with outsourcing, you need to look at whether or not the decision makers are listening to feedback about the quality of work produced or insisting on steaming ahead regardless.
I used to work for an MNC that had an office in Bangalore, and I had good relations with most of the senior staff. After a while I joined a start-up (which kind of became successful) and my salary rose pretty quickly. Fast forward 5 years: I get a call from the GM of my business unit (who had become the VP by then), who asked me to rejoin the company and offered me 20 LPA (lakhs per annum) when I was already earning 24 LPA. He just couldn't reconcile himself to the fact that my salary had become higher, by staying at the same company, than that of the people who went on on-site visits. I had to politely refuse.
Replace the word developer with its equivalent in any 'talent'-related job and it still applies, e.g. Radio, Voiceovers, Writing, Editing, to mention a few.
If you can't value my time, I won't find enough motivation to value yours. You pay peanuts, you get monkeys. :/
That said, I heard tell of a piece of code where "if n < 20" was implemented as a series of 20 "if n = 0", "if n = 1" conditionals, which beggars belief.
According to tptacek, that's in the works, but seems like that didn't make it into this release.
Is that man teaching because he needs a job, or because he enjoys teaching? Or does he just want to prey on the girls?
Is that woman developing video games because she's a good programmer, or is she just desperate for male attention?
The biggest difference is that fewer people see this as a problem when it's affecting men. Culturally men are expected to bear their problems instead of lament them.
- lower wages because of higher competition (the employer's incentive)
- more "attractive" workplace wink wink (the employee's perspective)
I know it's not politically correct to say that, but don't shoot the messenger.
I don't think men feel the need for more "female perspectives" in programming, especially as they wouldn't even know what those are supposed to be.
Women (feminists) push for more women in tech because they have seen some people become rich via tech jobs and feel left out. They don't push for more women garbage collectors and so on.
I think this is the appropriate answer to these guys talking about this kind of a topic.
I think, with respect to the original developer, that this is more a feature than an app, and it would probably help if it were developed further to target a more specific problem or group.
Having said that, I wish you best.
I suggest adding more value or lowering the monthly price.
Overall, very cool.
If I manually save that content to disk then any DMCA take down doesn't affect the content stored on my local hard disk.
But to trust something like this to make a permanent copy of the stuff I'm linking to, I'd need to know a bit more about them. Otherwise this is effectively like using a link shortener -- a single point of failure.
Then it doesn't work. Stuck at "caching page".
I hate you.
It seems like a bad idea, but they do have a point. Maybe when referencing a link, also add a small note pointing to one of these archive sites?
Would it be a better option if the "permanents" were shared across P2P/BitTorrent, and every unique item had at least 10 shares distributed across the globe, maybe a max of 20? When one share host goes down, a replacement just gets picked up.
I'm probably responsible for the thinking that annotations are only needed for "top levels and function parameters". I usually forget about the other ones, but I think those two are the most significant.
I don't really see the big advantage of these cross-platform GUI frameworks, anyway. Trying to force a standard interface across diverse platforms means coming up with odd idioms or patterns to achieve a kind of artificial homogeneity. Better to concentrate effort on making the business logic cross-platform, IMO.
His version of scoped_ptr seems to fake rvalue references without actually using rvalue references, but does so in a copy constructor. IMO this is a bit bonkers. If you're not going to use rvalue references I think move semantics is better done from a method, not a copy constructor.
Still looks like a handy library for wrapping things that are otherwise not portable.
Edit: I was mainly basing that comment on looking at juce_core... There's a crapton of other stuff too. Impressive for a one-person work.
I've obviously seen wxWidgets, but I'd even be interested in commercial offerings.
There is strong leadership from Jules on the direction of the library, which is generally a very good thing, though it does mean that sometimes there isn't much room to budge on controversial issues. Font rendering is one aspect that several have battled with for a while; I've struggled to get good, crisp, smaller fonts without resorting to using FreeType. Jules' argument seems to be that small fonts shouldn't be used, period, therefore the library won't render them well (I think there are technical as well as philosophical reasons for this, particularly on OSX). While I agree they should generally be avoided, there are certain situations where this isn't the case (reproducing an existing GUI for a client, fitting non-critical text in when screen real estate is at a premium, etc.).
Overall I would certainly recommend for anyone starting out in audio development, but be prepared to fiddle around with fonts; I'm not so familiar with the non-audio parts of the library.
They really need a lighter color for the text on their website -- tough to read.
The seemingly only advantage is that Dropplets doesn't require a database, and its landing page is amazingly beautiful, clear, and to the point (though Ghost's "features" page is slick as well).
On a more technical note, click events seem to be propagated up past the player on the landing page, closing it when toggling HD, for example.
EDIT: the installation part of the README.md needs some extra info, like the fact that it requires PHP and certain file/directory permissions.
But who knows, maybe Dropplets is the one.
It seems cool, it also seems like a little bit of work could be put into making it more accessible to people who didn't write it.
Also, how about support for category pages? I mean, show a page full of posts from one particular category only? Possible??
It's a little discouraging, however, that the issue queue has so many pull requests, comments, etc. without a response.
I love the concept of simplicity, and I'm sure the maintainers have a roadmap in mind. It'd be nice if that was communicated a bit, so I can know how simple they plan to keep it.
Which part is open to "objective measurement"?
I don't know if I'm a good programmer, whatever that is, but I use static type systems when I want a system to work both today and tomorrow after I refactor it.
Maybe it's time to reconsider whether I use OmniGraffle often enough to pay $50 to upgrade to 6... I don't really need any new features, just fixes for all the bugs that have cropped up.
But it looks really nice overall.
"Public notes" are a construction of blockchain.infodespite the confusing namethey are most certainly not part of the actual Bitcoin blockchain.
Most of the 'Chromebook as a developer laptop' setups turn it into a fancy SSH terminal, which is fine. But I don't really see a 'terminal' as a 'developer laptop'. I understand you can still access the internals of Linux along with the shell and other languages, but it doesn't work for a lot of people when you can't use the same apps/devices as you can on your 'normal' machine.
A $999 XPS 13 or MacBook Air looks pretty inviting, even at 3X the price of a $299 Chromebook when you realize how much productivity gain there is when you don't have to dink around with the OS and aren't limited in your app selection.
If the hardware is particularly good (is it?), why doesn't someone make it easier to just straight up run Linux from the hard drive? I mean no weird scripts, SD cards, etc. but just a proper distro like Debian.
The data for Korea clearly isn't accurate. Baidu is almost unheard of in this country. Not many young people can even read Chinese. Naver, a local company, dominates 70% of the Korean search market and a significant portion of the social networking scene as well. Its anti-competitive behavior is currently a hot topic in Korea.
I suppose the anomaly is due to the fact that the authors used Alexa (mentioned in the bottom right of the second image). Hardly anyone in Korea has the Alexa toolbar installed. People here, like elsewhere, pollute their PCs with all sorts of other toolbars, but rarely Alexa. The language barrier probably plays a part. I wouldn't be surprised if those who do have Alexa (usually foreigners) tend to have ties to a certain neighboring country with a very large population.
Does that mean they aren't defining Google as Google properties? What would the map look like if they did?
Another interesting thing to measure would be the most popular national website, e.g. for Denmark, the most popular .dk domain, represented in size per population.
EDIT: I'd hypothesize that the top website would be whatever bank has the most customers, or websites for government functions.
Also, there should be a new version of RISK: The Game of World Domination. It should feature tech companies as the attacking hordes.
#2 Google has so many popular products (search, Gmail, YouTube, Maps) that it makes sense that they're that big. It's equivalent to a person having a bank account with $1 billion in it: just by leaving that money in the account and raking in interest, you continue to get bigger simply by existing. In Google's case, there isn't strong enough competition to stop them from "being" and gaining more share based on their prior efforts.
#3 Could a new US based search engine compete with Google? Or are they just that big that the task is a fool's errand?
At number 4, with 1.58% of total campaign money, is East 103rd Street Realty. Parent corporation: Glenwood Real Estate Corp.
"Luxury Apartment Rentals in New York City"
Obviously the fact that people are renting their apartments is not something that buildings are capable of managing for themselves and has become a matter of great importance to the entire state of New York.
I know that a lot of people on this site think that if you add the words 'on the internet' you should be exempt from all regulation and taxation, but that's just not how the world works.
I hope that the people who've been profiting from the lack of enforcement are forced to play on a level playing field.
Disclaimer: my experience with NYC Airbnbs has been incredibly negative, including people listing under fake names, revealing that they'd given fake addresses at the last minute (when it was already too late to change plans), showing deceptive photos, and giving false descriptions.
Landlords will end up posting offers on Airbnb directly; that's what will happen en masse in the future.
And you'll need to pay the Airbnb premium to rent anything.
After all, if I'm a landlord asking $2,000 a month from normal tenants, I can just go on Airbnb and ask $200 or more a night, so I only need to rent it out for 10 days a month to break even; anything beyond that is profit...
Landlords aren't stupid; we'll eventually end up in that situation on a large scale.
However, user data is another matter. I don't know; there is a thin line between enforcing the law and failing to respect citizens' freedom and privacy.
The housing argument, instead, is just ludicrous. Let's put all the hotels in the city out in the suburbs then.
The issue in question is a tiny (percentage-wise) hotel tax. In my view, AirBnB is not resisting this tax for its own sake, but rather because it risks classifying hosts as hotel operators.
AirBnB is a wonderful service, and yes it has flaws, but NYC is caving to the demands of political interests who know how to play the lobbying game.
(Obviously the other issue, of whether it's even legal to rent out in the first place, is another matter.)
"State is concerned about hotel occupancy taxes and possible evictions by greedy building owners."
Greedy is a value judgment with big negative connotations.
I did an Airbnb stay in Rome last year... the government there has been trying to better enforce its tax laws, even to a comedic degree (you, the customer, can get in trouble for walking out of a gelato shop without a receipt). My host made very sure I signed the right paperwork after my stay... and I'm guessing that as long as the state gets its share, it has less incentive to crack down.
But it also reveals how much of our society is actually a facade, and people should realize that. This is not liberty or freedom, if you cannot do with your apartment or home as you wish. Next thing you know, the government is going to subpoena Craigslist to hunt down sales taxes for the banged-up table you sold.
This is subversion, perversion, and corruption of public resources and policy to ensure private profits and gains, because corporations really don't compete in America; they simply rig a system that makes it look like competition. Our economy is a fraud, just like that Western town at Disney World is a fraud.
Next up, you'll have to have a tracking device in your car to make sure you pay your taxes for being a DD, with them buying your food and soda in exchange.
> These are not things we can protect against directly but again, we can make it extremely difficult for these things to occur by using strong encryption and careful systems monitoring. Were anything like this ever to happen we would be talking about it very publically. Such an action would not remain secret for long.
> Ultimately though, our opinion is that these kinds of attacks are no different to any other hacking attempt. We can and will do everything in our power to make getting unauthorised access to your data as difficult and expensive as possible, but no online service provider can guarantee that it will never happen.
This kind of frank disclosure should be highly rewarded. I provided similar frank disclosure text (elsewhere) only to have it whitewashed.
When everyone is underplaying the real limitations, it's impossible for people to choose alternative tradeoffs ("Why should I use this slightly harder-to-use crypto thing when foo is already secure?") because the risks have been misrepresented. Underplaying the limitations also removes the incentive to invent better protection ("Doesn't foo already have perfect security?").
This is not true. The Australian Crime Commission has some of the most extensive secret coercive powers in the Western world.
I would suggest that either:
a) Fastmail is aware of this and is covertly spreading the word that it might be compromised; or
b) Fastmail needs better lawyers.
"There are of course other avenues available to obtain your data. Our colocation providers could be compelled to give physical access to our servers. Network capturing devices could be installed. And in the worst case an attacker could simply force their way into the datacentre and physically remove our servers."
As the colocation providers are based in the U.S., they would be subject to the National Security Letters. FastMail claims this is no different from any other hacking attempt. But in a normal hacking attempt, colocation providers would be free to explain to FastMail the extent of any hacking on their end. Moreover, hackers typically do not have physical access to any data. Even with encryption, physical access opens up a lot of attack vectors that most sysadmins don't anticipate.
It's much easier to compel operators to do something (through legal threats or potentially physical threats) than it is to do any active modifications to a complex system, undetectably. Passive ubiquitous monitoring is a concern because it's passive and thus hard to detect -- it's highly unlikely TAO can go after a large number of well-defended systems without getting caught. Obviously they'd be likely to hide their actions behind HACKED BY CHINESEEEE or something, but even then, it's relatively rare to have a complete penetration of a large site in a way which isn't end-user affecting, and rarer still for the site not to publicize it.
That said, if I wanted to compromise Fastmail, I'd either compromise a staffer or some of their administrative systems to impersonate staff.
Look at what they did to megaupload.com.
Just so we're clear, the point of this post was not that we think the rules don't apply to us. Instead, we're trying to make it clear what our position on these things is. The topic of this thread is a sensationalist sound-bite, nothing more.
I'm not going to go over the points again here because I'm pretty sure we said it all in the post (but ask questions if you like, I'll be here all week!).
The most important point to take away from this post is that your privacy is your responsibility. We're trying to provide you with as much information as we can to help you determine your own exposure, and to let you know what we will work to protect and where we can't help. It's up to you to determine if our service is right for you. No tricks, and no hard feelings if you'd rather take your business somewhere else!
This, in combination with FastMail being acquired by its former employees, coupled with their investment in CardDAV and CalDAV, makes me really excited about them. I was actually looking for a good replacement for Google Apps, and FastMail might be it. It's still a little expensive compared to Google Apps, though; I hope they'll bring those prices down just a little.
I realize that even if the servers were in Norway, an email from a FastMail user to a gmail.com account would still be read by the NSA (because it would pass through American servers), but email sent from FastMail to other email hosts in relatively safe countries would not be read by the NSA.
That's a very small part of a lot of what we have to say, most of which is:
* we can't be compelled (under current laws) to install blanket monitoring on our users
* we can't be compelled to keep quiet about penetration that we notice
* there are always risks, including the risk that any random group knows unpublished security flaws in the systems that we use
We have written some things about techniques we use to reduce those risks (physically separate internal network rather than VLANS on a single router for example) - these help protect against both government AND non-government threats. But we can't make those risks go away entirely.
What we're saying is - the physical presence in the USA only changes one low-probability/high-visibility threat, which is direct tampering with our servers.
Regardless of the physical location of servers, we would still comply with legally valid requests made through the Australian Government.
It is our belief and hope that this process is difficult enough to mean that US agencies only ask for data when they have good cause rather than "fishing" - but still easier than taking our servers and shutting us down, with all the fallout that would cause.
Unless you're using PGP or S/MIME, SMTP is still most often unencrypted.
(IANAL, IANAA, but I am pretty sure I am correct on this point.)
Next time I'm out shopping for email services, I will give my money to them! (And, to give something back for all the Tim Tams brongondwana brought with him to Norway every time he was on a visit ;) )
So FM should move their servers out of the US even if that's inconvenient.
This is why I use an email service in Norway (runbox.com), which, as far as I know, is not sharing information by default.
So maybe they don't get the NSL, but the people/group/company that is handling the servers might. This seems disingenuous. I could be wrong, but it feels like they are making claims that will dupe people into their service because they feel safe.
Why would you do that, especially when you're not even a US company?
 http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=WR... http://content.usatoday.com/communities/ondeadline/post/2012...
Gas is a different story but currently the price is pretty low compared to oil.
I'm all for subsidies to accelerate an emerging market or technology, especially if it's "the future", but for 5-10 years at most, until it becomes mature enough to handle itself. Subsidizing highly profitable companies for a century is beyond stupid, because it also means those companies can be a lot more wasteful, knowing the taxpayers will cover the difference.
I'd ask for an end to Middle East oil-wars, too, but that seems even less likely to happen, so I'll happily take the ending of subsidies for now.
All that profit doesn't seem to be staying inside the US.
<meta content="!" name="fragment">
Google has documentation on this here: https://developers.google.com/webmasters/ajax-crawling/docs/... and we've been using this method at https://circleci.com for the past year.
Waiting for google to request the page with _escaped_fragment_ should also prevent you from getting penalized for slow load times or showing googlebot different content.
Googlebot requests page -> your webapp detects googlebot -> you call remote service and request that they crawl your website -> they request the page from you -> you return the regular page, with js that modifies its look and feel -> the remote service returns the final html and css to your webapp -> your webapp returns the final html and css to Googlebot. That's gonna be just murder on your loadtimes.
If this must be done, for static pages, it should be done by grunt during build time, not by a remote service. For dynamic content, it's best to do the phantomjs rendering locally, and on an hourly (or so) schedule, since it doesn't really matter if googlebot has the latest version of your content.
Or perhaps I'm mistaken and the node-module actually calls the service hourly or so and caches results on app so it doesn't actually call the service during googlebot crawls. If that's the case, I take back my objections, but I'd recommend updating the website to say as much.
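The local, scheduled approach suggested above can be sketched in a few lines (hypothetical names; this is not prerender's actual code): render once on a timer, cache the snapshot, and serve crawlers from the cache so a crawl never triggers a live render.

```python
import time

class SnapshotCache:
    """Serve crawlers pre-rendered HTML snapshots from a cache so that a
    googlebot request never triggers a live headless-browser render.
    A toy sketch: render_fn would wrap phantomjs (or similar)."""

    def __init__(self, render_fn, ttl_seconds=3600):
        self.render_fn = render_fn      # e.g. a wrapper around phantomjs
        self.ttl = ttl_seconds          # hourly-ish refresh, as suggested
        self._cache = {}                # url -> (timestamp, html)

    def get(self, url):
        entry = self._cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]             # fresh snapshot: no render needed
        html = self.render_fn(url)      # stale or missing: re-render once
        self._cache[url] = (time.time(), html)
        return html
```

Googlebot doesn't need the latest content, so a TTL of an hour (or a cron-driven warm-up of all URLs) keeps crawl responses fast without a remote roundtrip.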
So exactly what is this project delivering as fallback content? A server-generated website?
but as with every technology, there are some tradeoffs
a) serving google a different response based on the user-agent is the definition of cloaking (it's not misleading or malicious cloaking, but it's cloaking nonetheless)
b) you hardcode a dependency on a third-party server - one you have no control over - into your app (and from the sample code on the page, there is no fallback available if this server is down)
c) there are latency/web-performance issues, i.e. for a first-time request by a search engine the roundtrip would look like so:
[googlebot GET for page -> googlebot detected -> app GET to prerender.io -> prerender.io GET to page -> app delivers page -> prerender.io returns page to app -> app returns page to googlebot]
this will always be slower than
[googlebot GET for page -> app returns page to googlebot]
so basically the prerender.io approach creates some issues. that said, we don't have - yet - another "no trade-off" solution
the "make ajax crawlable" approach basically allows - non malicious, non misleading - cloaking: https://developers.google.com/webmasters/ajax-crawling/docs/...
(sorry google, but ?_escaped_fragment_= was really one of your most stupid specs ever, even worse than "nofollow")
so if you target "?_escaped_fragment_=" in the GET request, and not the user-agent, cloaking a.k.a. sending different responses is ok
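the URL mapping that spec defines can be sketched in a few lines (illustrative only; see Google's ajax-crawling docs for the full escaping rules):

```python
from urllib.parse import urlparse, parse_qs

def hashbang_url(crawler_url):
    """Map a crawler request carrying _escaped_fragment_ back to the #!
    URL whose snapshot should be served - a sketch of the mapping in
    Google's ajax-crawling spec, not a complete implementation."""
    parsed = urlparse(crawler_url)
    qs = parse_qs(parsed.query, keep_blank_values=True)
    if "_escaped_fragment_" not in qs:
        return None                      # ordinary request: no snapshot
    fragment = qs["_escaped_fragment_"][0]
    return f"{parsed.scheme}://{parsed.netloc}{parsed.path}#!{fragment}"
```

so the server keys its snapshot lookup off the query parameter rather than sniffing the user-agent.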
but: it creates a double googlebot crawl issue i.e.:
[googlebot GET http://www.example.com/test -> googlebot parses HTML and finds <meta name="fragment" content="!"> in the HTML -> googlebot pushes http://www.example.com/test?_escaped_fragment_= into its "stuff to crawl" queue (a.k.a. discovery queue) -> googlebot crawls http://www.example.com/test?_escaped_fragment_= -> your server gets that request (or, if you use a prerender.io-style service, the whole roundtrip to the prerender.io site starts) ]
this is a no-go if you have a big site with hundreds of thousands to millions of pages.
and there is another much, much bigger issue:
* showing JS clients * and "other only-partially-JS clients" (google parses some JS) different responses
why? if there is no direct feedback, then there is no direct feedback!
non-responsive mobile sites currently offer an overall poor user experience. why? because all the guys working on the site sit in front of their fat office desktops. no feedback equals crap in the long run.
and it's worse for "robots only" views, because people just don't have to live with the crap their server spits out, as they always just see the fancy JS versions. since the hashbang ajax-crawlable spec came out I have consulted some clients on this question; everyone who chose the _escaped_fragment_ road anyway regretted it later on. even if the first iteration works, 1000 rollouts later it doesn't - if there is no direct feedback, then there is no direct feedback.
conclusion: if you have a big site and want to do big-scale (lots of pages) SEO you are stuck with landing pages, delivering HTML + content via the server, and progressive enhancement for functionality, until the day google gets its act together.
and for first-view web performance I recommend the progressive enhancement approach anyway, too.
If it's pre-rendered, it's missing something. If it has all the data at first, then it's not dynamic.
It's a multipage app, that uses ajax to function as a singlepage app. From the user's point of view it's a singlepage app, but it's accessible from any of the URLs that it pushStates to, so it's like the best of both worlds. It's fully crawlable because it functions as a multipage app, but it's got the speed of a singlepage app (if your browser supports pushState)
Google is less important (they already execute JS), however it's good for sites like Facebook (which doesn't when you share a link).
Similar idea, but implemented without a server for rendering, with a phantomjs process. And only for rails/rack apps.
I have not tested it yet, but I wonder if the speed of render will penalize you in the google results. Seems like a separate machine with a good CPU might be worthwhile if you are going to run this.
- Try to go to prerender.io, press "Install It -> Ruby on rails". Now it loads the ruby on rails example.
- Then go all the way down and change to "Prerendered content". Pressing "Install It -> Ruby on rails" doesn't do anything now.
Shouldn't it render the same content? "Add the middleware gem to your Gemfile..." and so on.
It uses phantomjs but removes all the styles initially so the rendering time is much faster. (my ember app was averaging 70ms to render, but I prefetch the page data)
Looking at prerender's source I didn't see any caching mechanism.
What kind of load times have you seen rendering your apps?
Have there been recent significant improvements in phantomjs's performance?
His review "The Synaptic Vesicle Cycle" in Annual Reviews offers a somewhat accessible look at the critical bits.
 Use scholar.google.com if you want to find "liberated" PDFs.
Did you start from a template or from scratch? If you used a template can you point me to it?
When the Vikings established their colony in Vinland, they wished to establish peaceable relations with the Native Americans. They invited the local chiefs to a party at their longhouse, in which they served an amazing new drink -- milk -- which the Americans had never seen before. The following morning, suffering from intense abdominal pains, the natives accused the vikings of trying to poison them, and promptly laid siege to the colony until the Vikings packed up and buggered off.
But for this incident, it's entirely possible that the Vikings might have established a durable colony in the Americas, leading to contact between the old and new world 500 years earlier.
Similar to how presidential elections are impacted by a 100-million-year-old coastline, this small genetic mutation has affected the entire world's economy, especially agriculture-based industries. Once we enter the post-natural-selection era where we can select the DNA of our offspring, I wonder how different the long-term future will be.
Well, Spain is not entirely sunny. Certainly the north of Spain is not. You can see a darker line in the map that divides Spain. This is a set of mountains called the "Cordillera Cantábrica" that makes the north much rainier than the rest of Spain.
There are places in Spain where it rains more than in the north, but only for a very small part of the year. Most of the year is sunny there (the part that is not the north).
It's a fairly academic book oriented more toward the spread of the proto-indo-european language and related topics. I got the pointer to that book from a podcast-delivered lecture series, "WS3710 History of Iran to the Safavid Period", a tolerable recording (tolerable from a technical standpoint; OK to listen to, but not going to win any awards for audio engineering). It's a recording of a class at Columbia from 2008.
My interpretation of the book and lecture series is people kept livestock for quite a long time before some mutant gained the ability to drink milk, which given the herd of meat animals meant they gained a lot of nutrition compared to the non-mutants, which is a huge survival gain.
I've found I enjoy university lectures much more now that I don't need to take midterms and write papers, so that's pretty much all I listen to.
Raw milk contains lactase producing bacteria, so anyone consuming milk in its raw form would be able to digest it without any digestion problems. Only pasteurized milk is lactase free as heating destroys the bacteria that produce the enzyme. Most milk historically would have been consumed raw so an adaptation to produce lactase would have been unnecessary and would not provide a significant competitive advantage.
Personally I am mostly interested in Asian history. We know that South Asian and Tibeto-Burman people have been milk and cheese consumers for a long time, as have the Central Asian peoples including the Mongols, who are said to have actually preferred liquid foods to solid ones.
These days, I know first hand that a lot of people in China are getting stuck in to milk products for the first time. How can this be, if they should keel over in pain and indigestive flatulence? The only person I've ever seen mass-produce cheese in an apartment was a Burmese friend in China, who I'm sure wasn't after the lactose for its apparent usefulness in diluting heroin! Wikipedia states: Some studies indicate that environmental factors - more specifically, the consumption of lactose - may "play a more important role than genetic factors in the etio-pathogenesis of milk intolerance" ... i.e. the intolerance notion is largely bullshit and people can adapt to lactose. That seems to fit the observations.
Anyway, interesting to ponder... I went and polished off a block of New Zealand cheddar to celebrate. (Saving Roquefort for a salad tomorrow, then it's off to Indonesia where cheese is no doubt harder to find!)
I love how this little plus provides, in the long run, an overwhelming advantage.
Tangent mode on:
Somebody really, really needs to write the How To Deploy Rails Without Losing Your Sanity handbook. I will buy a copy. It would sell thousands of copies.
A lot of the problems with people's interactions with Capistrano are environment/ops problems which have known solutions that work, but which rely on people having a great understanding of arcane trivia which is spread across conference presentations, blog posts, commit messages, and the practical experience of the best Rails teams. Unless you're prepared for an archaeological expedition every time you start a new Rails project, you're going to do something wrong. You should see the bubblegum and duct tape which I came up with, and it mostly works, but I know it is bubblegum and duct tape.
Non-deterministic deploys of code from (usually) un-tagged source control
I feel lucky in that I was mentored by an engineer who decided to teach me, one day, Why We Tag Shit. But for the Why We Tag Shit discussion, I would be like almost every other intermediate Rails engineer, and be totally ignorant of why that was a best practice until lack of it bit me in the keister, at which point the server is down and one has to rearchitect major parts of the deployment workflow to do things the right way. Why We Tag Shit is only about a 500 word discussion, but it's one piece of organic knowledge of the hundreds you need to do things right, and it is (to the best of my knowledge) not covered in docs/QuickStarts/etc because that seems to be out of the purview of the framework proper (I guess?).
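The operational core of Why We Tag Shit fits in a few lines: pin every deploy to the commit an annotated tag points at, and refuse to deploy anything else. (A sketch with made-up helper names, not Capistrano's API; the runner is injectable so it can be exercised without a repo.)

```python
import subprocess

def _git(*args):
    """Default runner: shell out to git (assumes a repo is present)."""
    return subprocess.check_output(("git",) + args, text=True).strip()

def release_sha(tag, run=_git):
    """Resolve a release tag to the exact commit it points at, so the
    deploy is deterministic: re-deploying v1.2.0 next month ships the
    same bytes it shipped today, unlike deploying a moving branch."""
    sha = run("rev-list", "-n", "1", tag)
    if not sha:
        raise RuntimeError(f"refusing to deploy: {tag!r} does not resolve to a commit")
    return sha

# Usage: deploy(release_sha("v1.2.0")) rather than deploy("master").
```

That's the whole trick - the deploy script takes a tag, never a branch name, so "what's on production" is always answerable.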
I'm sure that I'm ignorant of several of the hundreds of pieces of things one needs to do to do deployment right, as evidenced by my fear every time I execute my deploy scripts. I, and I must assume many other companies, am willing to pay for an option which gets me to a non-bubblegum and duct tape outcome.
Seriously, folks: there is a product here.
The fact that people read this article, and don't feel the need to mention his fear of releasing software just shows how broken things are. It shouldn't be an accepted fact of open source that if you release new code that might be backwards incompatible, you get vitriol for it.
... but I'm too cowardly to release it and make it mainstream, as I'm afraid it'll destroy whatever good will for open source I have left when the flood of support questions inevitably comes in, followed by all the people who are unhappy with what I've built and feel obliged to tell me how bad I am at software.
* Ditch v2 ASAP (seems like you've already decided on this). It's pretty obvious you aren't motivated to work on that codebase anymore. I've looked at v3 and it's much better thanks to relying on Rake tasks.
* Be selfish. It's your project so if you think v3 is the way to go forward, go with it and who cares what the "community" thinks.
* Seems like you already have a few people helping out, so continue and maybe make a formal "core" team. There's nothing wrong with taking a step back from the heavy coding yourself. But I believe that Capistrano would be better with your guidance than without it.
codebeaker: There was no mention of Harrow in that post. Are you still working on that? I'd assume that if you were you'd continue work on Capistrano since it's based on it.
I'd suggest Lee runs a Kickstarter type thing and I'd very happily throw in $100. But I don't think he will because it doesn't seem quite right.
So here's a (wild and completely off the cuff) startup idea - a pre-emptive Kickstarter. Someone creates the project "Lee Hambley, continue working on Capistrano." and we all pledge into the pot. If Lee agrees to do it, he gets the money. If not, we don't pay anything.
As a result various releases of v2 were buggy. Capistrano is a hard-to-test application, agreed, but its test coverage is plainly woeful.
About 6 months back when the 2.4.12 release was broken (https://github.com/capistrano/capistrano/issues/434) I suggested removing the asset pre-compilation stuff from Capistrano. Capistrano is a general-purpose tool; at the company where I work we use it for deploying java, php, ruby and all sorts of stuff. I don't understand why it should have poorly tested asset pre-compilation things built in.
I don't know what made Lee work on a rewrite. I can only imagine how difficult it must have been for him to work on something so big singlehandedly while running a company.
His last point is very valid about using RVM, rbenv etc in production. I don't know why people do that. Does that make it easier? Aren't people aware of something like - https://launchpad.net/~brightbox/+archive/ruby-ng ?
What you want to do is build a single package of everything your application needs (which includes the application code and all dependencies -- libc and up), then copy that package to the production servers.
It shouldn't matter if your application server has Ruby 1.9.3 and you need 2.0.
It shouldn't matter if the last deploy of your app needs Nokogiri compiled against libxml 2.8 and you now need 2.9.
It shouldn't matter if you are running 5 different apps with 5 completely different set of dependencies on the same machine.
It shouldn't matter if you need to use the asset pipeline.
It shouldn't matter if github or rubygems drops out half-way through the deploy process.
All the production server should get is a single package of all that your application needs, then a 'restart application' command.
Docker should be able to handle all this simply.
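A toy sketch of the single-package idea (names and paths made up): everything the app needs - code plus vendored dependencies - goes into one versioned tarball, and that file plus a restart command is all the production box ever sees.

```python
import pathlib
import tarfile

def build_artifact(app_dir, version, out_dir="."):
    """Bundle the application tree (code plus vendored dependencies)
    into one versioned tarball - the single file production servers
    receive.  Building happens on a build box, so github/rubygems
    outages or compile failures can never break a deploy midway."""
    out = pathlib.Path(out_dir) / f"app-{version}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        tar.add(app_dir, arcname=f"app-{version}")
    return out
```

Each release unpacks into its own `app-<version>` directory, so five apps with five conflicting dependency sets coexist on one machine and rollback is just pointing at the previous directory.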
If I had a Mac I'd skip the ad-hoc Ruby environment switchers and skip straight to Vagrant.
If your tool sees any sort of uptake, it suddenly no longer is yours. The community suddenly expects you not only to continue to modify the base code to improve functionality, but also to adhere to a sort of backwards compatibility so that everything they know and love about your baby never changes.
I can't imagine how much more taxing this would be once the tools you built become an integral part of other teams' workflows. The burden and stresses of keeping "the world" afloat would cause many a sleepless night even for people of strong constitution.
I have used Capistrano a lot, I built my "default" setup, compiled it into a gem, and released it here: https://github.com/kaspergrubbe/simple-capistrano-unicorn and moved on with my life as a developer.
I know of at least two bigger organizations that depend on Capistrano (and my gem) for their deploys. I feel like Capistrano is the way to go if you manage your own servers, and need to deploy to them.
Capistrano started my Rails experience, and I am very grateful for the work put into it. But I never wrote and said "Thank you" or "Great job", maybe we need to be more vocal to the people that put in time and energy to build the software that we use a lot.
At our company, we develop multiple RoR apps and we've run into many of these issues (mostly related to the asset pipeline), yet none of them were actual problems with Capistrano itself. Since it's the bridge between so many things, I can imagine why it's easy for it to become cannon fodder.
We've tried to standardize many of our recipes such as local asset precompilation into a single cohesive gem (https://github.com/innvent/matross). That has saved us the trouble of debugging the same issues over and over when they inevitably pop up across applications.
Kudos. It takes a lot of courage to admit your baby is not going to fulfill the future you had initially imagined.
I believe you when you say that PaaS will go; the only reason I use Heroku and Dokku (from Docker) is their easy deployment, and for no other reason than deployment.
Why would the exchange rate affect what percentage of all bitcoins 600k is? This is terribly confused thinking from the journalist:
600k bitcoins, that's "almost $80m".
$80m! But at current exchange rates that's "just over 5%!".
When the actual numbers - bitcoins in his wallet and bitcoins in circulation - are sitting right there...
Living in San Francisco allowed the feds to pick them up on their lunch hour. Even just hopping the border to Mexico would've required them to get international cooperation and extradite him.
He'd likely be a free man if he were in Croatia or Kazakhstan.
For a currency to be so secure that a state cannot seize it from a citizen is unprecedented.
It will be fascinating to see how this plays out.
Yes ... if no backups exist.
Isn't this totally false? He could have easily made backups of the wallet and even given copies of it to others. I'd expect there's a whole bunch of ways that these bitcoins could still get spent?
- you can make indefinite copies of your locked wallet so TLA basically can't confiscate it
- you can protect your wallet with a secret (passphrase and/or a key) so that they can't unlock it
- you can distribute the secret among several people using one of the secret sharing protocols
- or hide it steganographically in an ordinary file while the TLA in question still can't prove it's there
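The secret-sharing step above can be illustrated with a toy Shamir split over a prime field - an educational sketch only, not a vetted implementation (real use calls for an audited library):

```python
import random

P = 2**127 - 1  # a Mersenne prime: all arithmetic happens mod P

def split_secret(secret, n, k):
    """Split an integer secret (< P), e.g. an encoded wallet passphrase,
    into n shares such that any k of them recover it, while k-1 shares
    reveal nothing.  Toy Shamir secret sharing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange-interpolate the degree k-1 polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Hand three of five shares to three lawyers in three jurisdictions and no single subpoena can unlock the wallet.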
Ugh, no. Having the file means you can transfer out the bitcoins. Anyone having the file can transfer out the bitcoins, so the FBI securing that wallet doesn't lock down those bitcoins.
The FBI cannot properly "seize" the bitcoins unless they use the wallet to transfer the coins to a fresh address they make and control. And I'm not sure that traditional seizure laws allow that, because AFAIK we've never had this scenario before.
 Obligatory XKCD http://xkcd.com/538/
The reference client (it's not mentioned which wallet software) uses hundreds of thousands of rounds of key stretching, enough that on a GPU you're only getting a few attempts at the key per second. Might irritate them enough to crack out a good-sized farm.
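To illustrate why the round count matters (numbers illustrative, not any particular wallet's scheme):

```python
import hashlib

def stretch(passphrase, salt, rounds=200_000):
    """Derive a wallet-encryption key with PBKDF2-HMAC-SHA512.  The
    round count is the whole point: every passphrase guess costs the
    attacker `rounds` chained hash evaluations, so raising it scales
    the cost of a brute-force search linearly."""
    return hashlib.pbkdf2_hmac("sha512", passphrase.encode(), salt, rounds)

# A GPU doing ~1e9 raw SHA-512/s manages only ~5,000 guesses/s at
# 200k rounds - so a large passphrase space needs a serious farm.
```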
I'm wondering if that conversation has come up among management at FBI and what the outcome has been.
Surely you can make copies of your wallet and keep them in various secure locations?
Not subtle ones either, but plain simple ones. Like using his personal email to register on forums, promote SR, and recruit people.
Using stackoverflow with his personal email, again... Yes, these are not solid pieces of evidence on their own, but they made the FBI's life much easier in getting the guy. You might say that it's easy for me to point out those mistakes, but they are so basic, and it's not as if he was running some Nigerian type of scam; he should have been way more careful.
Or keeping messages about the 'murder-for-hire'. Yeah, it's rather obvious that no one got hired, but good luck explaining these negotiation tactics to the judge. Plus he mentions $80k for another 'murder-for-hire'. I'm really curious how the thing with this 'murder' will end up. I mean, he could have simply deleted the messages just in case. It's not FB, where they would stay forever...
Saying "The Bureau is in a position equivalent to having seized a safe belonging to a suspect with no idea of the combination and no hope of forcing it open any other way." is completely incorrect.
(This is relevant due to the fifth amendment to the constitution. In many cases, turning over a password or combination is considered self incrimination and thus cannot be compelled by the state.)
So Ross, how about you tell us the password and instead of 14 life sentences you get only 20 years?
Certainly something like this would have a big effect on the BitCoin market. I'm interested to see what happens in the future.
EDIT: I mean exploiting this bug: https://github.com/bitcoin/bitcoin/issues/2838
He is active on Kaggle.com too.
For more practical ML projects see: https://github.com/amueller
There might be some great stuff here, but many of your potential audience will never find out, because they'll give up.
Why the [Classification][100K sample?] checkpoint?
And more info in general about this whole cheat-sheet.
1. Runtime-linking _is_ dynamic linking, and it's a PITA that Java doesn't have an option for static linking, especially given the inherent fragility of the CLASSPATH.
2. clang and gcc have compatible command-line option syntax.
3. The options example he lists for gcc is a pure strawman. Maybe they are necessary to compile that particular source file, but it is not necessary to use all these options in the average case.
4. The article title says "now more than ever", but there's really nothing about recent developments here. This article could have been written ten years ago.
This really just seems like crappy linkbait.
JVM switches like -XX:CMSInitiatingOccupancyFraction=70 -XX:SurvivorRatio=2 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:NewSize=2048m -XX:MaxNewSize=2048m (taken from real recommended settings for an open-source project) are, of course, not opaque-to-the-average-user at all :)
The resulting code is massive walls of text, 90% of it machine-generated through eclipse. This is an indication that the language's idioms are unable to support the current complexities in application programming trends, and you have to interplay with them heavily to squeeze out usable programming logic.
That doesn't end there. Perl is older than Java, yet I see Perl supporting a lot of idioms far better than Java can with its bulky frameworks.
It's just that the language is beginning to show its age.
The only reason to use Java these days is basically the availability of super low cost devs, legacy code, tooling, etc. - the same advantages any tool accumulates with age.
What I mean by that is simply that a lot of developers are prejudiced against Java due to it historically being slow and having tedious development feedback cycles.
We use Java extensively (and Java EE 6) in an agile IT business and it is truly an asset. I encourage others to look in the direction of Java.
There are still fans of Big Band music!
Java fans (or maybe I should say, employers) who want to keep exhuming this dead horse might be well served to emphasize "you can get a job" and "it's enterprise" and "JIT makes Java faster than Assembly" as they have been doing for decades, rather than tarting it up (a Java logo with an electric guitar, seriously Dad?) and comparing it to languages which are even more ancient. Java is closing in on 20 years old...
- Python: slow, dynamic typing
- Scala: slow compilation, very complex syntax, worse tooling
- C#: MS-centric (but other than that C# is superior to Java)
Doesn't the JIT do all sorts of crazy stuff that's not always reproducible and thus hard to profile? I know this is the case with V8, but probably also so with Java. I don't know much about compilers, but the author seems to know even less.
The irony of the article is that ZeroTurnaround not only makes money from the java community, but does so by selling tools that work around shortcomings of the jvm. So not only is it in their interest to have a large dev community on the jvm; if you are a bit mean you could say it's in their interest to have a crap jvm :)
(As I was in the middle of this reply, the Java Auto-Updater popped up and stole my focus!)
The algorithm used here is GNFS (general number field sieve), which is the same algorithm that's been used for about two decades. In other words, this has no impact on the security of RSA.
More information: http://en.wikipedia.org/wiki/Integer_factorization_records
EDIT: Wikipedia article says 696 bits.
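For a sense of scale, even a toy factoring method cracks small semiprimes instantly, while GNFS-class records at ~700 bits remain far below the 2048-bit moduli deployed RSA uses today. A sketch of Pollard's rho (not GNFS - just to show why special algorithms, and then sheer size, are what matter):

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n.  Cost grows roughly
    like n**(1/4), so this dies long before cryptographic sizes; GNFS
    (used for the record above) is far faster asymptotically, yet
    still hopeless against 2048-bit moduli."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n - 1)
        c = random.randrange(1, n - 1)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                       # whole-cycle collision: retry
            return d
```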
 https://en.wikipedia.org/wiki/RSA_Factoring_Challenge https://www.google.com/#q=%241+%2F+(%240.09+%2F+kwh)+%2F+100...
> Mon Sep 23 11:09:41 2013 commencing Lanczos iteration (32 threads)
> Mon Sep 23 11:09:41 2013 memory use: 26956.9 MB
> Thu Sep 26 07:17:57 2013 elapsed time 51:56:44
I'm not sure that I'm understanding all the details. Does this mean that they factored a 210-digit number in 52 hours on a single machine?
Really? Oh yes it's right there under "Pulitzer, Nobel, and similar prizes." http://www.irs.gov/publications/p525/ar02.html#d0e8326
Stay classy America.
The winners buying real estate need to look out for maintenance costs and taxes... better expect to spend 5% of the cost of the house, or more, annually, for taxes and utilities and upkeep, so if the prize is more than perhaps 10 times your annual income you're eventually going to be in a world of hurt. I've had some relatives end up land-poor and it's not a pretty sight. Here's 5 million dollars of lakefront property. Whoops, he doesn't make 500K/yr.
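The rule of thumb is easy to make concrete (assuming the ~5%/yr carrying cost above):

```python
def carrying_cost_share(prize_value, annual_income, rate=0.05):
    """Fraction of yearly income consumed by taxes, utilities and
    upkeep on a property worth prize_value, at the ~5%/yr rule of
    thumb from the comment above."""
    return prize_value * rate / annual_income

# $5M of lakefront on a $100K salary: upkeep alone is 2.5x income.
# Keeping the prize under ~10x income caps upkeep near half of income.
```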