While the customer may be financially powerful in the relationship, I feel that tipping culture gives the server power to withhold good service, as a punishment or as an optimization strategy, at their own discretion. It causes a server to judge you as soon as you walk through the door... will this person give a good tip? Should I ignore them and focus on this other table?
The worst part is that the tip happens at the end of the meal, after all of the 'costs' of providing good service have already been paid. If the patron stiffs the server, the effort was 'wasted.' So, from the server's perspective, it's better to make an educated guess up front. Based on what? The way they dress? Their grammar? The car they pulled in with?
It's a terrible system.
This is just one data point, but the Linkery was infamous around San Diego for having much worse service than other places in a similar price range. I'm convinced their experiment with tipping was correlated with this.
The whole idea of mandatory "service charges," or "fees," in any business, is kind of bizarre. It seems strange that we've accepted that certain types of businesses (airlines, hotels, ticket brokers, in some cases restaurants) should list prices that differ significantly from the actual price charged. There does seem to be some backlash against this practice: Kayak, Hipmunk, and many other travel sites now list the full price of airline tickets (though, often, not hotel rooms, with their "facility charges," whatever those are). And today I noticed that StubHub now shows prices inclusive of all fees. I understand why a business would like to list prices that are 30% lower than what the customer actually pays, but it seems a little odd that we're all OK with it.
 - http://en.wikipedia.org/wiki/David_Chang
[Edit - Whoops. As a San Diego resident, I feel bad for neglecting to mention that The Linkery was awesome and like lots of folks here I'm sorry to see it go.]
But I know that I've not gone back to restaurants precisely because I didn't like interacting with the staff, or I didn't like how they interacted with my guests.
And looking back, the most specific I could be about it was "well, the waiters were kind of intense." You know what I mean. They were professional, they did their job, but ... they were intense. And they didn't need to be; we're going to give them 20%, but they don't know that. So they're ... slightly intense, forward with their presence, so you won't dare undertip, instead of melting into the background and letting the food and ambience dominate.
In a restaurant like this guy posits, waiters aren't compliance professionals.
On the other hand, in a tipless restaurant, they aren't paid based on merit, so maybe they won't be as motivated to do a great job in the parts of their work that require concentration and diligence.
But they're doing a job that a robot should be doing as soon as possible, and whether my server is good, great, or okay isn't going to affect how my food tastes.
This is in Massachusetts, which does have a tip credit and where servers rarely get paid the statutory minimum wage if it's a slow night (against the law, but it happens). Just for reference, the server minimum wage is $2.63/hour. If you can't make enough in tips because it's a slow night and one of your coworkers is bringing the tip average down, your incentive drops off dramatically as well.
For something like this to work nationwide, the tip credit has to go.
And the kicker: the service is always good. Because if the server isn't nice, they'll probably get fired. Because guess what, serving food and being polite about it is their job. That is what they're getting paid for. So demanding extra money to do their job with a smile just seems too weird for me.
 The author of the infamous torture memos, which argued that it isn't torture unless it's as painful as organ failure.
Two places have no tipping in New York City:
- Sushi Yasuda - considered by some the top sushi spot in the city (certainly one of the most expensive).
- The tap room at Whole Foods - let's just call it a little more lowbrow.
My budget hasn't encouraged me to visit Yasuda in several years, but I will say that I like not having to pay tips at the tap room. It certainly makes an inexpensive place seem even cheaper, and their service hasn't suffered for it at all.
I'm not trying to be difficult or argumentative, I'm genuinely wondering. I'm pretty anti-tipping and pro-pay-what-you-will myself, and I'm just wondering if I'm fooling myself now.
However, that view is based on the UK system where minimum wage for the serving staff is enforced and where tip pools are allowed.
Conversely, leave a tip on a credit card bill if you want to ensure that the entire tip is subject to any tip pool.
 I have some knowledge of this in the mid-Atlantic states, but the author is speaking about the West. Maybe it is different over there, maybe it isn't.
"However, to give the tip money to every worker would be illegal. The law is historically very clear the $220 in tips belongs to the two servers only, and cannot be distributed to any other employees." ?
>if one job gets a $2/hour raise, that most likely means that another job will have its wage reduced by $2/hour.
This statement right here sets up half of this post's argument, and it isn't realistic at all.
That was contracting, so I needed to come back to LA to refill the travelling stack. It doesn't take much work to pivot that into a SaaS product or two that replaces the same amount of income. Again, maximising for free time and flexibility with profit being a nice side effect.
Then it gets really good.
I'd love to hear some passive income hacker (PIH) failures: risk & reward, opportunity cost in terms of time spent on your own venture vs. climbing the corporate ladder vs. traditional passive income paths (say, becoming a slumlord, dividend investing, etc.).
The problem with these types of posts is that they make it seem like passive income is easy. In fact it is a hell of a lot of work to create, say, an info product, and marketing it can also be a full-time job. You need to have surplus income and time to take on the risk of doing this, which is hard to do if you're an employee. This probably explains why most of the passive income stuff we read on HN comes from consultants.
I probably only make about $20-$25k a year off a few low-maintenance projects, taking up random contracting gigs when I feel like it.
The experience of seeing new places and meeting new people is priceless. No amount of money, equity or incentives can ever make up for that.
If I could change one thing, it would've been quitting my full time job even earlier.
I can code/debug for a corporation in my sleep, but creating entire products on one's own is a whole different ball game. I'm tired of these articles making it sound _so easy_.
"That's a shame, because as a programmer in the 21st Century, you're in a unique position to do something that most people simply can't; live a life with adequate income, lots of time and total freedom over what you do with it."
That sounds great, but how?
Harder than you thought it would be, but easier than most just-starting-out passive-earners would expect, if that makes sense.
I have the same view of wealth as you and I am a programmer at a startup. I wasn't hugely interested in this but now I am.
This takes a lot of luck in general (more than work), and/or a lot of cunning and malice.
Otherwise, if everyone just ends up being a founder, it obviously doesn't work either. You need minions to do the actual work. If you don't want to be a minion anymore, someone else has to be. Never-ending loop.
You can achieve the same balance and freedom with a part-time job, freelancing, as a PIH (although I don't think the OP actually earns passive income in the accounting sense) or some other work arrangement.
What's most important to have the kind of freedom that the OP describes is to first have a relationship with money that supports it.
Some resources are richer than others.
The report appears to find that MIT should not have changed its neutral stance, which is disappointing, and I'm skeptical. Here's a quote:
Given the lead prosecutor's comments to MIT's outside counsel (see section III.C.3), MIT statements would seemingly have had little impact, and even risk making matters worse, although this information was not shared with Swartz's advocates.
Keep your eyes peeled for a response soon.
However, the report says that MIT's neutrality stance did not consider factors including: that the defendant was an accomplished and well-known contributor to Internet technology; that the law under which he was charged is a poorly drafted and questionable criminal law as applied to modern computing; and that the United States was pursuing an overtly aggressive prosecution. While MIT's position may have been prudent, the report says, it did not duly take into account the wider background of policy issues in which MIT people have traditionally been passionate leaders.
How on earth is a firm supposed to make sense of nonsense like that to inform their device targeting?
I was first puzzled and am now intrigued by their choice to use physical screen size as a basis for that diagram, as opposed to screen resolution. Very appropriate in our resolution-independent times. Of course either way you do it, Android is going to have more variation than Apple. That diagram is also kind of difficult to read; what shade of blue corresponds to what market share?
Finally, it's awesome of them to share the source data! Maybe I'll actually get around to implementing my suggestions.
Nowadays we hardly think twice about the fact that there are millions of combinations of monitor and GPU brands and models and configurations. Has anyone ever thought to do a similar comparison of desktop and laptop "fragmentation"?
The most important trend to notice is that Android operating system breakdowns are getting better.
4.x accounts for ~60% market share. 2.3.x accounts for 34%.
Those are good signs.
edit: it's working now
This is some great information to think about concerning Android fragmentation, and how, perhaps, it's not actually a bad thing.
> 47.5% - Samsung's share of those devices.
It seems implausible that Samsung has made 5,934 distinct Android devices.
Diversity is good. Apple is not very diverse. Why not add something else?
edit: over 20 minutes now. I had to restart it.
My first suggestion would be to get similar shots to those you've already done with a female replacing the male. It's an easy way to add more shots quickly.
I also like the idea of more scenes including people. Make it look like they're having fun and also using the phone/tablet at the same time. Smiling faces sell products!
I would be glad to pay for each screenshot generated, because it saves plenty of time and simply makes presentations look way better.
There is a channel (you already have it), there is a real problem to be solved, and it's something people would pay for... Sounds brilliant :)
I think that in terms of conversion, potential users can more readily see themselves using the product on their own device with this type of frame (as opposed to just a screenshot with no device frame).
However, I would suggest several PC shots - like a Lenovo laptop, a Dell monitor, etc. - instead of being so Apple-centric.
 - http://dribbble.com/shots/1023533-Moneys-Mobile-Digital-Wall...
But it's a nice idea!
Right now I have just posted my own images, but my goal is to get other photographers/individuals to add photos of different devices.
I am updating my webserver right now, but for now you can run it locally or deploy it yourself.
Otherwise: I like it!
(I realize there are one or two Android and Windows phones in the list, but still...)
We're sorry, but something went wrong.
We've been notified about this issue and we'll take a look at it shortly.
Would love some Windows-y machines as well; all the laptops are Macs. (Which is what I use, but I still live in a world of Windows laptops.)
It assumes that whoever is reading the article is similarly situated to that earlier version of the author, both in terms of career circumstance and interests.
This is particularly clear in this excerpt:
I found that there are two types of people that power through the frustration [...] [t]hose that are really intellectually interested in learning to code. If you haven't learned to code by now, it's highly unlikely you're one of them.
By the way, I decided to "learn to code" at age 30, and I find it interesting. Then again, I was also full of platitudes when I was in my 20s.
To answer anecdotal proof with anecdotal proof, I studied finance and taught myself to build web apps. I didn't do it because I had to. I didn't do it because I couldn't stop myself. I just forced myself to do it the same way I force myself to memorize Chinese characters, the same way I force myself out of bed every morning. Willpower isn't some mythical ability granted to the anointed few. It's just asking yourself, what am I doing right now? Is it what I want?
There's also a troubling perception of what 'coding' is behind this post and many others. I write code for a living and I'm under no illusions about my abilities. As James Somers pointed out in Aeon, I'm a kid playing around with tools given to me by adults. Nobody like myself or the author is going to build a Rails, a V8, an Ember, a Heroku. If I learned how to use a brush I wouldn't call myself an artist. It's fine that we're becoming more abstracted from the machine's reality - thank God DHH didn't have to use punchcards - but with that abstraction should come a bit of humility about what we've actually learned. Because for web development, at least, it's mostly syntax.
I'd better stop before I exceed the original post's length. If you'd like a tl;dr, it is: fuck the author's position, my experiences contradict it, and the author is confused about what constitutes 'real' code. (However, I wholeheartedly agree with his suggestion to learn by building something you yourself want.)
It wasn't as technically difficult as most CS homework, but it was the first time I started thinking about programming as a tool to solve an actual problem I was experiencing.
Unfortunately, the time commitment becomes prohibitive to those that have to keep running the job/consulting treadmill and can't fall back on an investment banker salary (or similar) to fund their creative ambitions for a year or more. That unfortunately is the real answer to the post's title.
1. Nights and weekends are bad
2. Forget Codecademy
3. Have a real project you want to build
Check out the full article here: http://blog.zackshapiro.com/want-to-learn-to-code-start-here...
And people not understanding how systems so central to our society work is at the core of several recent political problems and conflicts.
> Oh and we're using Windows because games only exist for it, and I can't get the setup to work on OSX (haven't tried too much though).
This is a hack that goes: Thunderbolt -> ExpressCard -> PCI-Express. Two adapters is not quite so elegant, but whatever, this seems to work and I love it.
A 13" Air has 12 hours of battery life and weighs nothing and now you can dock it at home to game. Perfect.
For my work setup with OSX something like this would be great!
On a related note, does anyone else see the Scala logo and confuse it for a symbol representing databases or hard disks? It just doesn't click with me.
Scala seemed a perfect fit. Until I looked at job boards for Brisbane :(
For those that use it regularly, what sort of things do you build in it? I really want to replace PHP with something, but between enterprise-language Java and just-as-dynamic-as-PHP Ruby and Python, Scala looks like the only language that might fit... but then Play as a framework looked very heavy for my usage.
Scala is wonderful and all the hard work put into it is greatly appreciated. However, every Typesafe product targeting the web that I've tried has given my browser indigestion.
Luckily the API docs and tutorials are unaffected and load quite quickly.
Generally, I've not seen this handled gracefully.
It also seems a bit premature, why not wait for guardian.news?
There are a few independent online news services but having the Guardian start an Australian branch is a welcome boost to democracy here. Visiting the Australian sub-site on a co.uk domain wasn't all that attractive for a country that has been independent since federation. So well done everyone at The Guardian.
The only time I ever use them is when my hands are tied by some sort of hardware restriction; a JNI wrapper around some native lib that talks to proprietary hardware like a motor controller where instantiation of more than one comms object would blow out a fuse or something. And even then, I'd make the argument that the people who designed the hardware and its corresponding C API should have just made a safer interface instead of saying "Don't call `new` more than once, or else!". Seems lazy and less-than-appropriately fault tolerant.
I believe that we should be able to come up with a set of (slightly different) rules that are simpler to explain and yield the same good designs.
Is it really better for me to pass the Clock instance all over the place, rather than have a singleton instance that can be referenced anywhere?
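For what it's worth, the trade-off is easy to sketch. Here's a minimal Python illustration (all class and function names are mine, purely hypothetical):

```python
import time


class Clock:
    """Wraps the system time so tests can substitute a fake."""
    def now(self) -> float:
        return time.time()


class FixedClock(Clock):
    """Test double: always reports the same instant."""
    def __init__(self, instant: float):
        self._instant = instant

    def now(self) -> float:
        return self._instant


# Approach 1: a singleton referenced anywhere. Convenient, but every
# caller is invisibly coupled to this global, which makes testing hard.
GLOBAL_CLOCK = Clock()

def timestamp_message_global(text: str) -> str:
    return f"{GLOBAL_CLOCK.now():.0f}: {text}"


# Approach 2: the clock is passed in. More plumbing, but the dependency
# is explicit and trivially swappable.
def timestamp_message(text: str, clock: Clock) -> str:
    return f"{clock.now():.0f}: {text}"


print(timestamp_message("hello", FixedClock(1000.0)))  # -> 1000: hello
```

The usual compromise is a module-level default that can still be overridden per call, which keeps the convenience of the singleton without hard-wiring it everywhere.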
-- edit
Wikipedia: "If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur."
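A minimal sketch of that Wikipedia point in Python (class and method names are mine, not from the article):

```python
class Rectangle:
    """Immutable: width and height are fixed at construction."""
    def __init__(self, width: int, height: int):
        self._width, self._height = width, height

    def width(self) -> int:
        return self._width

    def height(self) -> int:
        return self._height

    def area(self) -> int:
        return self._width * self._height


class Square(Rectangle):
    """A Square is just a Rectangle whose sides happen to be equal."""
    def __init__(self, side: int):
        super().__init__(side, side)


def describe(r: Rectangle) -> str:
    # Works identically for Rectangle and Square: with no setters,
    # nothing can break the "width == height" invariant, so a Square
    # substitutes for a Rectangle without surprises (LSP holds).
    return f"{r.width()}x{r.height()}, area {r.area()}"


print(describe(Rectangle(2, 3)))  # -> 2x3, area 6
print(describe(Square(4)))        # -> 4x4, area 16
```

The classic violation only appears once you add a `set_width` that a Square cannot honor without also changing its height.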
Leaving aside the issue of whether anyone buys into third wave feminism's definition of "privilege", this comes out as an indictment of every abuse reporting system ever made. It adds nothing and suggests nothing, only says "this sucks".
Actually, I take that back. It's impossible to leave that issue aside since you implicitly accept that definition to even make sense of this article.
Thing is, "abuse" is determined by the service provider, not the users. (And rightly so - or else you run into the problem mentioned where people flag something off and ruin someone else's day because their delicate sensibilities were offended.) Twitter and every other social communication site has a list of "thou shalt not's" which you are reporting when you click on the "flag" button.
Rather, this is the simple case of a "potential business party" who copied front-end code, design (obviously), and images. They also redirected traffic from an associated domain. So, beyond blatantly committing copyright infringement, they are also breaking fair-trade competition/fraud laws and, depending on jurisdiction, trademark law (established through use in the marketplace rather than registration).
What you can do to protect yourself against such activities is simple: send a cease-and-desist letter, file a complaint with a consumer protection agency (if you've got one), and possibly take the issue to the local police.
Their suggestion of registered trademarks, watermarks, and (meh) patents might increase the reward money from a lawsuit and improve your chances of winning in court, but it won't actually "protect" you against entities who already willingly commit copyright infringement.
The same principle applies when someone else has the same idea as you at the same time, and you catch wind of it right in the middle of your implementation of that idea. It might be slightly more difficult, since you arrived at the same good idea independently of each other. Just focus on executing better.
Ideas are a dime a dozen - they aren't worth anything in and of themselves.
If they blatantly copy your code, as is the case here, then it becomes slightly worse, an actual legal issue. But again I wouldn't worry too much since I'm willing to bet that they aren't that good at coding either.
Oh, and your site is down.
The other upshot is I get to give back to the FOSS community: couldn't have built it without them.
As the saying goes, if you can be copied that easily: you've built a feature, not a business!
+ business model innovation
+ go-to-market strategy
+ execution, focus and efficiency
+ iteration
+ user/customer care and cultivation
+ brand integrity and trust
+ vision
+ interfaces
+ partnerships
+ etc.
I would worry about a google penalty for duplicate content across the two sites. If they are really copying your code wholesale you can screw them over by defining the canonical urls for various pages (as your own domains).
While I agree that having your site copied verbatim sucks, I don't think contributing to the giant pile that is our current patent system is the way forward.
Especially not for a startup.
Also related, check out the work Japan has been doing for years
Autodesk 123D (which is free) can create a 3D model from just two photos.
And, in fact, preferably you don't pull my tree at ALL, since nothing in my tree should be relevant to the development work _you_ do. Sometimes you have to (in order to solve some particularly nasty dependency issue), but it should be a very rare and special thing, and you should think very hard about it.
I very often run `git rebase master` in my feature branches to avoid having many conflicts to resolve just before my pull request to master. Once merged into master, the initial commits I rebased from master did not seem to have changed. Am I missing something here?
In that situation, you'll be the one blamed. It is unfair, but that is what will happen.
For a case in point, Microsoft encountered this one repeatedly in the '80s and '90s. (It didn't help that sometimes it probably was their fault... but most of the time it wasn't. It really, really wasn't.)
When I learned to drive, my tester required that I demonstrate competence with the controls, using them at the appropriate times, in the appropriate ways.
If my car exposed all of its private internal operations, I would have needed to know how to use them, and demonstrated that I know that. I haven't a clue about fuel flow and gear ratios, or air/fuel mixture or how the thermostat affects how the engine operates. I'm quite happy that I didn't have to demonstrate all of that too.
What the article ignores is that a good API provides everything the consumer needs, while keeping the API small and easily comprehensible. A driver who has to keep track of 5 details is more likely to learn to use his car more quickly, and less likely to crash than one who has to keep track of 200 details and make decisions about each one.
Case in point (of which there are many, I'm sure): QExtSerialPort. The author needed access to underlying Windows functionality that Qt didn't publicly provide; however, there was this nice, private header file lying around they could use. The Qt team later decided they wanted to remove the contents of that file, because no one should ever have been using it. Anyone who wanted to build QExtSerialPort had to go and grab the original file and put it into the correct location. If a patch had instead been submitted to Qt to fix the problem, many hours would have been saved.
The author might get more points with me if they added "keep private usage private," but instead they are advocating accessing private internals of third-party tools in new open-source projects, which restricts the original developer from making changes without impacting users of those tools. Privacy is important; if you want to go around it, fine, but you have to expect the price for you and your users. To hand-wave around that is naive at best.
I'd argue that the post is too theoretical; in practice, enforced privacy works much better. People will do stuff against your advice. That's OK, you say - they'll get in trouble eventually, but it was their decision. However, it affects you, the library (app, etc.) developer, as your clients might turn out to be more powerful than you.
Think of Linus' rant about not breaking userspace. He's right, I believe. In general, you are not allowed to break client code, even if the client did something they were discouraged from doing.
Interfaces are contracts. The ultimate documentation is the code, not the comments. You are saying that in the following case,

    // do not access
    public int getSize();

the comment takes precedence over the visibility modifier. Well, no.
In FOSS libraries that I maintain, methods and properties that I don't want to expose are prefixed with an underscore and designated as protected. Protected members are not directly accessible, but anyone who wishes to play with them can create a subclass to access them. They also don't need to make any further changes other than subclassing, whereas private members might need to be overridden or (even worse) reimplemented depending on the language. So I think "protected" hits a nice balance between simplicity, openness, and maintainability.
The requirement to create a subclass to access protected members might come across as an inconvenience, but it sends the same message as the article's "dodgy" JS syntax: Here be dragons, tread carefully and don't blame me if your app breaks. It would be very nice if users understood that the leading underscore is meant to send the same message, but since they're apparently not getting the message, a little more inconvenience might be needed.
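As an illustration of that convention, a hypothetical Python client (this is my own sketch, not one of the actual libraries in question):

```python
class HttpClient:
    """Public API: just get()."""

    def get(self, url: str) -> str:
        return self._send("GET", url)

    def _send(self, method: str, url: str) -> str:
        # Underscore prefix = "protected": an internal helper that may
        # change without notice between releases.
        return f"{method} {url}"


class DebugHttpClient(HttpClient):
    """Deliberately opts into the internals: here be dragons."""

    def _send(self, method: str, url: str) -> str:
        # Overriding the protected member is the explicit, visible act
        # that signals "I accept the maintenance risk."
        print(f"sending {method} to {url}")
        return super()._send(method, url)


print(DebugHttpClient().get("http://example.com"))
```

The subclass declaration leaves a paper trail in the user's own code, which is exactly the "tread carefully" signal the underscore alone fails to convey.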
Then there is a matter of "well, it's not MY fault if you didn't use the public API and your code is now broken." I recall even Steve Jobs chastising developers for doing this.
Building something with a sensible yet strict privacy model takes a lot of upfront design. Makes sense for code that will be used by the masses, but maybe not for a small project.
Anyway, if you can't make it as a musician, I guess you have to do something else productive. Cory Doctorow gets it when he describes himself and other people who make a living in the creative arts as being in the 0.001%, the extremely lucky few who get to have that as their career.
All professional musicians could disappear tomorrow and we'd still have enough great and varied music to listen to for all of anyone's lifetime and also have massive amounts of high quality participative music making by non-pros.
Incidentally, I'm a semi-pro musician myself but I long ago stopped fighting the I-deserve-a-living-as-a-musician battle and switched to figuring out how to be sure my contribution to the economy actually mattered.
Imagine a clone of Spotify. You can do offline streaming and such for all the music in it. You pay $10/month for the privilege (which I think is fair). $5 or so goes to the company to maintain their servers and such. The other $5 though you can do interesting things with. Basically, each month you have $5 worth of "tip money". You can send however much money you want to whatever artist you want.. or you could alternatively setup other neat dynamic setups (the author gets 50 cents when you favorite his song, etc). .. The point being that it's under the user's control. And, if the user chose to not tip any artists this month(or they have left over tip money), then it's evenly split between all the artists they listened to for the month (depending on song count or whatever).
This model, I believe, would work because the biggest thing standing in the way of giving your favorite band a tip is that it goes through services you are not already using. I don't want to go to their website and then sign up for PayPal to give them $1. The thing with this method is that payment is already accounted for. You already paid. Now you just get to pick which band deserves your money. This could even work with a free version, with a model like: for every 10 ads you listen to in a month, you get 10 cents added to your tip money.
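The accounting for that idea is simple enough to sketch. A back-of-the-envelope Python version (all numbers, function and artist names here are hypothetical, just to make the mechanics concrete):

```python
def settle_month(tip_budget, explicit_tips, play_counts):
    """One subscriber's month: explicit tips are paid as directed,
    and the leftover tip budget is split among the artists they
    played, weighted by play count."""
    payouts = dict(explicit_tips)
    leftover = tip_budget - sum(explicit_tips.values())
    total_plays = sum(play_counts.values())
    for artist, plays in play_counts.items():
        payouts[artist] = payouts.get(artist, 0.0) + leftover * plays / total_plays
    return payouts


# $5 budget; the subscriber tips band_a $2 directly. The remaining $3
# is split 30:10 between the two bands they listened to.
result = settle_month(5.00, {"band_a": 2.00}, {"band_a": 30, "band_b": 10})
print(result)  # band_a: 2.00 + 2.25 = 4.25, band_b: 0.75
```

The whole pot is always paid out, so the service's cut stays fixed and the user's choices only move money between artists.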
If I knew anything about the music industry, this would be the startup I'd be behind. This wouldn't work with the label model, and giving artists tips isn't really something I think most people would understand at this point... but some day... some day.
Spotify isn't mutually exclusive of the myriad other ways musicians can earn from their music online, many of which are described on this very same infographic.
Anyway, the CLQR is by far the most useful CL book I've found. It's small enough to print and bind yourself, and the pages on LOOP & the type hierarchy are just pure typography.
I recently finished Let Over Lambda (finished the first read-through, anyway), and I almost wish I had started with it. CL is the C of the lambda calculi, but it didn't 'click' until the final chapters of LoL. With a sufficiently smart compiler (and by compiler I mean sets of macros), CL can do damn near anything.
ANSI Common Lisp is a great book, too, but I found the chapters oddly arranged (chapters 12,13 need to come first, maybe).
ANSI Common Lisp from Paul Graham is also a good CL textbook. I bought it as a complement to Practical Common Lisp and it also has a nice quick reference at the end.
Having read both On Lisp and Let Over Lambda, of those two I would recommend On Lisp more, because it has more practical applications of macros. LOL is much more esoteric/playful/abstract, and not everybody is into that sort of thing.
Maybe I will look at Let Over Lambda.
HtDP should go before SICP. HtDP2 is a much better read than the old HtDP. In both books, the exercises must be done.
PAIP is a decent read, but Mr. Norvig, it seems, dislikes macros and recursion and prefers structures and loops.)
btw, all the books are "available" on piratebay, if you are not too strict or american.)
As someone who followed her previous blog 'Creating Passionate Users', I'm really glad she's back writing publicly - not so much for this particular post (which wasn't anything novel), but more that it means her scars have healed enough. Hope to see more posts from her soon!
It's a minor detail, but an important one.
EDIT: It looks like the image has been updated. Thanks Kathy!
I think this article provides something of an answer: work in itself is not a bad thing. It takes effort and concentration - it's work - but it can be enjoyable, satisfying, meaningful.
But putting in effort that is wasted, by being diverted into tedious, pointless, unnecessarily complex tasks, is a bad thing. It's not enjoyable, not satisfying, not meaningful.
Therefore, any technological progress that reduces that tedium is a good thing (even if it has problems of its own, or exposes other problems, provided net tedium is less).
[I don't think this is the whole answer, but I think it's part of an answer (probably, things like saving lives, health, and somehow enabling people to relate better are more important goals).]
For anyone interested in her prior blog, Creating Passionate Users, I coped with her absence from the blogosphere by curating an e-book with all of my favorite posts.
You can grab a copy here:http://www.kevinmconroy.com/pdf/creating_passionate_users.pd...
Like many things in psychology, this is basically unfalsifiable. Our brains have pools of resources? How do you even differentiate between willpower and cognitive processing at a neurological level? It's one model, but there are other equally valid but also unfalsifiable explanations. What about this: anxiety goes up after working on a hard problem (memorizing a 7-digit number, apparently) - maybe you can test this by measuring cortisol levels - and so you choose the (stereotypically) more satisfying and rewarding dessert (cake) as a form of emotional eating and also, you know, as a reward for a job well done?
I mean, it's basically just saying, "Use your brain, and your brain will get tired. Both solving problems and doing something you don't want to do count as using your brain." Sure, but I hardly need an experiment to tell me that.
Also, what about people who perform better under stress? Since it requires willpower to work hard and meet a deadline, and since the quality of your cognitive processing also goes up (for an initial period), doesn't that defeat the "competing for the same pool of resources" claim?
Psychology is great and a lot of the unfalsifiable stuff is valuable but it's irritating when it's dressed up as science.
She also ignores that for some people it takes more willpower to eat the cake. It can go either way depending on a person's ideas. She just assumes everyone has currently trendy ideas wherein fruit bowls are unpleasant but virtuous and people use willpower to eat them. But many other lifestyles are possible. For example, one might think cake is more delicious but they are scared of getting fat so it requires willpower to enjoy eating it instead of giving in to the fear, whereas the fruit bowl is easy to eat because there's no pressure against it, so it's the easy default.
Which had this quote:
"This isn't something that happens to some people online, it's something that happens to everyone who has ever put any of themselves out there for public consumption."
One thing that has confused me from the beginning, when Sierra first claimed that she had received death threats, was exactly why this story took on the scale that it did. I recall at the time, of the 100 tech bloggers that I read on a regular basis, this story overshadowed everything else. I recall that previously I had been unsympathetic to Sierra because of the perception that she tended to rely on hyperbole and drama to sell her books. For that reason I was initially skeptical of her claims. Later it turned out that the 4 bloggers who harassed her had clearly stepped over some line, and said some things that were, at the least, very rude. As I recall, all of them later apologized (all of them were bloggers with substantial reputations in the world of tech blogs). But given the amount of abuse that happens online on a regular basis, it seemed a little surreal to me that the story reached such a scale.
The subjects were told to memorize a number, and on their way to a different room where they expected to be tested, someone stopped them mid-way and asked them to choose between two snacks -- a fruit salad and a cake. The people who had been told to memorize many digits didn't choose the healthy snack as frequently as the people who had been told to memorize few digits (and, presumably, could focus on which choice they really preferred).
It tries to convey "common sense" concepts, using conjecture and complicated constructs. It hurts my brain when I try to understand what is meant by "to use up cognitive resources". The more convoluted an explanation is, the less I feel it has been understood by the person explaining it. I have a strong distaste for psychology terms that add depth, but not clarity, as if trying to validate and give authority to the field or explanation.
A bit ironic for an article trying to explain the concept of "minimizing drainage of the cognitive tank" (to paraphrase).
So, what is this article really about? This -- http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/032134....
Seems to me that a viable explanation for the first experiment is that heavy cognitive processing trips some circuitry in the brain that says "We got a lot of work to do. Get me some glucose."
She's back. I'm giddy as a schoolgirl.
You could test whether the conversion funnel for cake (or other low-self-control goods) converts better after a more 'intense' workout on the site/app.
I don't see how that follows from the memorization experiment. Maybe the people who could remember 7 items felt they worked hard, so they deserved to be rewarded with a chocolate cake.
Anyway, the super cool insight of this article is the relationship between cognitive load and willpower. We all knew "try harder" didn't work. Simplifying everything else is a far more powerful way to manage your motivation, and it makes it super clear that you can really only do a certain number of things. When your motivation turns to procrastination, it isn't some "problem" you are having; it is you simply hitting your cognitive limit for the day/week/month. Awesome.
If you spend the day exercising self-control (angry customers, clueless co-workers), by the time you get home your cog resource tank is flashing E.
This is particularly bad in the geek community, as we are used to high cognitive load (configuring X anyone?), and so we brush off any complaints about it as "stupid" or "computer illiterate."
One early app example is all the gas mileage tracking apps. Damn near every one of them in the early iPhone days had the spinning odometer control and the spinning gas number controls (where you spin each number up and down, like a combination lock). I recall being infuriated by those designs, because all I really wanted to do was quickly enter the odometer reading or the gallons, and spinning those damn digits was NOT at all quick. Compared to the effortless/mindless act of typing into a digit keypad, spinner controls required much more cognitive load (did I spin too fast, will it go too far? Let me catch it at the right digit. Which digit do I need to push up or down to make it match what's on my real odo?).
We just don't know a lot about how cognitive resources are utilized. Long distance runners know this. Athletes know this. Consider the whole concept of the "second wind," where they find the strength to better their game using far fewer resources, after they have already tired. Some type of cognitive resource depletion gives people even more energy and motivation.
While I agree that things should be made simpler and we shouldn't over-gamify things, I don't think we should make decisions with the cake / fruits question in mind. That just provides a framework to dumb things down. We will never enable the users to hit their second wind if they never get tasks that make them crave cakes.
I guess my point is: simplicity is good. Simplicity to the point of dumbness is not.
It certainly seems that highly successful, highly visible people (creatives, executives, politicians) tend, disproportionately, to exhibit behavioral problems (addiction, suicide, etc.). I don't know if it really is disproportionate, but if so, is it related to their exertion, or depletion, as the author puts it? Is it the visibility and the accompanying scrutiny? Maybe it's the other way around, and the underlying psychological makeup propels short-term performance.
Very interesting stuff, especially in context of burn-out.
Researchers were astonished by a pile of experiments that led to one bizarre conclusion:
"Willpower and cognitive processing draw from the same pool of resources."
Bizarre, all right. Unless the subjects were wrestlers or models, why should the choice of fruit v. cake involve self control at all? If you wished to argue that they thought they deserved more of a reward, I might be willing to consider that.
And are we talking about seven numbers vs. two numbers (as in the illustration), or a seven-digit number vs. a two-digit number (as in the text)?
Just a speculation.
I do think, though, that while you might be drawing from one 'pool', it's a pool you can work to expand. To me this seems to be in the same vein of psychology that makes ADHD medicine ineffective for kids in the long term. There's one pool of resources you are drawing from, but like muscular strength, you aren't doomed to your current limits.
The guys who memorized numbers might associate cake with a reward and choose it just to reward themselves for a meaningless and boring waste of time they chose by mistake, while for the 2-digit group the task didn't even register as work.
As for willpower/self-control - hormonal levels are almost always the major factors. Just do the silly experiments which are "considered unethical" involving "images from those magazines" and you will notice lots of correlations.)
The famous experiment with tape-recorded heart-beats is the beautiful one.
Again, trying to find a single cause in psychology is kind of naive. The theoretical framework advocated by Marvin Minsky, with its constant competition among a multitude of semi-independent agencies (specialized regions of the brain), helps to develop the notion of multiple causation.
My guess is that if one would nail a poster of a fit bikini girl to a wall, the number of cakes chosen will be reduced dramatically, everything else being equal.
But as pop psychology the article is perfectly OK.)
That being said, I am glad that she has finally verbalized what I have always felt.
As the only person running 5KMVP, I have always found it hard to do things like marketing and customer relations/support on the same day I do development.
Trying to do both would also negatively impact my performance at each.
But now that I have people working with me, I can concentrate on interacting with my clients without feeling guilty (i.e. without knowing that the rest of my day is dead, from a development perspective).
Also, this explains the logic behind Steve Jobs always choosing a black turtleneck, blue jeans and sneakers. If he has 1 less thing to make a decision about, his life is much easier. I have recently adopted that, and am trying to simplify my wardrobe as much as I can.
This also impacts how I schedule 'outside' events. If I have to go to an event outside of the house, that usually means no coding for me that day. I can't quite explain why, other than that the mere fact that I know I have to go out is enough of a distraction to keep me from 'getting into the zone'. Glad to know that I am not deficient in any way, and that it is all just being depleted from the same 'cognitive tank'.
Seriously, I thought the article was great. It would be great even if it wasn't written by Kathy Sierra.
As for the willpower situation, on a tangent, I really believe that the notion of willpower as a useful ANYTHING is outdated and badly needs to be replaced.
The reality is we are smart people who understand our brains, and can reprogram it. Using emotions and basic urges to create motivations and positive feelings about the things we NEED to do but typically dislike doing is the key here.
Luckily there is a group that is teaching these skills outside the normal context of "self help" that turns off oh-so many people.
It would be interesting to see an experiment that 'cognitively taxes' participants by having them perform a task that is not considered positive. Memorizing a number elicits a feeling of accomplishment that may contribute to the justification I described above.
What I would like to know is how can we grow this limited resource?
Doing the same with iOS development is painful. Apps for the simulator end up in arbitrarily named directories, so you can at least inspect their sandbox, and they can be invoked via extremely long command lines. But forget about apps on the device itself. libimobiledevice has reverse engineered some of it, but, for example, there is no way to start or stop an app from the command line.
I was doing some FTUE work on both Android and iOS with a third party app, and needed to stop it, clear the data and start it again. For Android I just had to press up arrow and return. For iOS I had to do multiple gestures on the device, then use an app named iFunBox (really) to manually clear out the sandbox, and then launch the app again via touch.
 First Time User Experience
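For comparison, the Android side of that loop is a couple of adb invocations, scriptable from the host (the package and activity names below are placeholders, not from the original post):

```shell
# Stop the app, wipe its data, and relaunch it.
# com.example.app / .MainActivity are hypothetical names.
adb shell am force-stop com.example.app
adb shell pm clear com.example.app
adb shell am start -n com.example.app/.MainActivity
```

These require an attached device or emulator, which is exactly the convenience the iOS toolchain lacks.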
You can't find documentation or do most things easily without the IDE. And the IDE often surprises you by assuming things, such as saving the project files when you change something in a preference dialog, not providing an Undo for that, and not telling you all the files modified by such an action.
The biggest issue, and something that has gotten me into a state of white-hot rage, has been Apple's certificate/provisioning profile nonsense. I don't think I've ever gotten a profile to work from the get-go, even just for development (a requirement that's positively ludicrous). That's why I generally develop/test on Android first.
Seriously, I've lost a whole week just because of some certificate snafu.
Every iOS developer has to go through that learning curve. It is part of the initiation process, unless you want to stick with straight SQLite. CoreData becomes a merit badge of honor. Every developer has their war stories about NSPersistentStoreCoordinator, PSCs on multiple threads, threading, performance, sorting, etc.
Quick note on performance in CoreData. If you need to cache objects that you use frequently, make your own in-memory cache. CoreData is not optimized for caching objects.
But really, CoreData is something most people who move away from iOS dearly miss. Not everyone wants fine-grained SQL-level control over persisting data. There is no equivalent in Android. Nada. OrmLite and some other libraries have tried. Where most of the Android OR persistence libraries break down is either m:n relationships or performance, or both.
However, times may be a'changing. Maybe CoreData and some of its pain can be abstracted away. If I were to advise a new iOS developer, assuming their requirements for persistence weren't too complicated, I'd tell them to go with Parse for managing backend persistence, or http://helios.io from Matt Thompson (of AFNetworking).
> Neither Android or iOS support this "Flow Layout" natively
I don't know about Android, but iOS has just that: UICollectionViewFlowLayout. It would be trivial to implement a tag list as he did.
However, if you want to do something a little more interesting, particularly with any kind of interactive multimedia, then Android makes things harder or downright impossible.
Yes the Android emulator can be very slow but testing on a real device is very quick without the hassle of certificates.
 Windows and Blackberry as well
On a previous stable version I had to suffer a month without code completion (well, with broken code completion), and there were many complaints online about it, but no fix (at least none that worked for me). This did teach me to memorise more and type faster.
Also, half-baked things like Storyboards & IB, which you cannot really use for actual apps because you need code to add images, custom fonts etc. to controls, and the often buggy code generation for CoreData, make me think that this has no priority for Apple. It feels outsourced (as in, thrown over the fence with a vague spec) and more neglected with every new version, making me think it's some kind of arrogance: let developers do everything the hard way, they cannot do without us anyway. Every story and tutorial I read seems to back this up; working around the quirks in the toolchain instead of the tools helping you. I keep wanting to believe I'm doing it wrong, but I haven't met anyone yet with a better experience.
It's not like game dev is one of the most complex kinds of development, requiring game design as well as low-level graphics programming skills.
Mother of god. Simply using layout constraints (Auto Layout) instead of manual layout code would have saved over 1,000 lines, I would estimate.
Current problem: a quantum system is used to do a calculation, but it decoheres too fast to be useful, or to scale.
Possible solution: do part of a calculation, shove the result in one of these light storage thingies, decohere, reboot and re-establish coherence, feed in result from light storage thingy, continue calculating.
I also enjoyed how they went from: Heinze, Hubrich, and Halfmann -> H and H and H -> the three H's. I kind of expected it to go to "triple H".
EDIT: Note to self, reading is tech.
- Permitted at the domain apex (yes really! unlike CNAMEs!)
- Allows weighted round-robin
- Allows lower-priority fallback services
- Unusual port numbers no longer required in URIs
- Doesn't get confused with non-HTTP services located at the same FQDN
Fortunately, neither the standard nor (as far as I can see) the normative references actually say you have to use an A-type record. Unfortunately, that will remain the convention unless someone makes this easy but explicit change.
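For reference, an SRV record (RFC 2782) carries priority, weight, port, and target fields. A hypothetical HTTP mapping, which no mainstream browser currently honors, might look like this in a zone file (names and ports are illustrative):

```
; _service._proto.name   TTL  class type priority weight port target
_http._tcp.example.com. 3600  IN    SRV  10       60     8080 a.example.com.
_http._tcp.example.com. 3600  IN    SRV  10       40     8080 b.example.com.
_http._tcp.example.com. 3600  IN    SRV  20       0      8443 backup.example.com.
```

A client would split traffic roughly 60/40 between a and b (both priority 10) and fall back to backup only when both are unreachable; the port travels with the record, so it never needs to appear in the URI.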
I'd get involved but I fear the politics. Would I have any chance of being able to advocate for this change?
I just feel that HTTP should not reimplement TCP. SPDY/HTTP2 just seems much more complex than necessary.
http://jimkeener.com/posts/http is a 90%-complete post on what I would like to see as HTTP 1.2, plus some other things I think would be beneficial.
As far as I know, this is not true. Server Push is only for the server and can only be done as a response to a request. It's not a WebSocket alternative.
Server Push means that when a client sends a request (GET /index.html), the server can respond with responses for multiple resources (e.g. /index.html, /style.css and /app.js can be sent). This means the client doesn't have to explicitly GET those resources which saves bandwidth and latency.
At the same time I also submitted another article that I still think is interesting and relevant today: https://news.ycombinator.com/item?id=6014976
Here's stuff that backs my argument:
And here are more viable and real alternatives that not only increase the speed by a factor of n, but also increase security and compatibility to our mobile generation:
http://roland.grc.nasa.gov/nrg/local/sctp.net-computing.pdf / http://tools.ietf.org/html/rfc4960
PS: I was initially afraid that HTTP2.0 was optimized for Advertisers...pheww
What I thought was original in this approach is the question of how you can sue the manufacturer of a kit plane over its design if it's open source. Of course lawyers will surely try, so it has to be tested in court first.
Dictionary.com has 2 definitions:
1. Computers. pertaining to or denoting software whose source code is available free of charge to the public to use, copy, modify, sublicense, or distribute.
2. pertaining to or denoting a product or system whose origins, formula, design, etc., are freely accessible to the public.
I guess the latter definition could apply, where the components can be produced (e.g. 3D printed) from detailed "open source" blueprints? However, I don't think the plane will be built from 100% "open source" rendered components. Generic or branded components may need to be purchased as well.
Using their plans as a guideline for building an ultralight instead of a light sport aircraft would reduce the cost dramatically. Ultralights have much lower limits but don't require any formal training/licenses.
In the end you could probably build an ultralight for the cost of a pilot's license.
Anyway: This whole thing seems wildly narcissistic. Who gives a shit if he moves in with his girlfriend or gets a vasectomy? These are issues to be decided between KMikeyM and his friends/family, not someone who gave him a few bucks for some fake "shares." Especially if he doesn't know that someone personally.
The main takeaway for me is how worrying the chosen attack vector was, and what it says about the state of the USA. Think about it - the most effective way to remotely cripple someone you hate is to turn the USA's ridiculous drug enforcement apparatus on them. It's not a bomb or insults, or any kind of direct or overt physical harm, it's simply mailing them a narcotic and tipping off the police.
What if this guy hadn't been monitoring things? He could very well have been in a nasty, highly stressful, possibly career ending situation simply due to America's stance on drug enforcement.
I am not sure if that's a good or a bad thing - if grandpa is not online, can he do video calls to the grandkids?
The second is of course - I do not monitor these boards and of course the next attempt will not be public. Not sure how to react if a dozen baggies got delivered. Hand it over to the cops I guess.
It has the makings of an interesting real-life DDoS attack on politicians, for example.
I'm surprised at how personal these attacks are. Is it that common for public security figures to be at such risk?
Seriously, 'goons'? What is this, an Archie comic?
But, yes, fascinating article, nonetheless. I dislike the man for reasons difficult to articulate, but there is no arguing with a story like this. Great read.
But this one can easily ruin someone's life. Or at least cause enough stress to shorten it for a while. Not even talking about the legal expenses to prove it's not yours. I mean... the police find a reasonable amount of Class A drugs at your place. 'It's not mine.' 'Yeah right, everyone says that.'
Linux distros mitigate the cold boot entropy problem by saving some state from the RNG on shutdown (on Debian, it's saved in /var/lib/urandom/random-seed) and using it to seed the RNG on the next boot. On physical servers this obviously isn't available on the first boot, and on cloud servers, the provider often bakes the same random-seed file into all their images, so everyone gets the same seed on first boot (fortunately this doesn't harm security any more than having no random-seed file at all, but it doesn't help either). What cloud providers should really do is generate (from a good source of randomness) a distinct random-seed file for every server that's created, but I haven't seen any providers do this.
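As a sketch, such a per-server provisioning step could be as simple as the following (the seed path and the 512-byte size are assumptions for illustration; on Debian the real file lives at /var/lib/urandom/random-seed):

```shell
# Generate a distinct RNG seed for each newly created server,
# so cloned images don't all boot with identical entropy.
# SEED_FILE path is illustrative; adjust for your distro.
SEED_FILE="${SEED_FILE:-/tmp/random-seed}"

# 512 bytes from the provisioning host's own entropy source
dd if=/dev/urandom of="$SEED_FILE" bs=512 count=1 2>/dev/null
chmod 600 "$SEED_FILE"   # the seed must not be world-readable
```

Run at image-creation time (not baked into the image itself), this gives every server a unique first-boot seed.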
Many people, especially beginners, make the mistake of leaving the same SSH keys in a certain template or in a snapshot of a virtual machine that they later use as a template.
There are a few files that you really, really need to wipe out from a wannabe image template:
- /etc/ssh/*key* (for reasons explained in the parent article)
- /var/lib/random-seed (the seed used to initialise the random number generator; this is the location on CentOS)
- /etc/udev/rules.d/70-persistent-net.rules (so that the VM's new NIC - with a new MAC - can use the same "eth0" name)
People who want to do this more exhaustively can have a look at libguestfs and its program virt-sysprep, which does all of the above and more!
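The three deletions above can be sketched as a tiny script. This is only a minimal illustration of what virt-sysprep automates (it handles many more cases), and it takes the mounted guest root as an argument rather than assuming paths on the host:

```shell
#!/bin/sh
# Minimal sketch of per-clone cleanup (CentOS-style paths from the list above).
# Pass the guest's mounted root so the script never touches the host's files.
sysprep_lite() {
  root="$1"
  rm -f "$root"/etc/ssh/*key*                              # host keys regenerate on first boot
  rm -f "$root"/var/lib/random-seed                        # RNG seed regenerates on first boot
  rm -f "$root"/etc/udev/rules.d/70-persistent-net.rules   # let udev map eth0 to the new MAC
}
```

Usage would be something like `sysprep_lite /mnt/guest` with the template's filesystem mounted there.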
Cowboy images like this are exactly the reason trademarks exist. Commercial providers who don't get certification are in fact violating Ubuntu's trademark by telling you that you are getting Ubuntu, when in fact you are getting a modified image which is possibly compromised (such as in this case).
If our IP address changes (eg. ISP assigns a new one for the cable modem) then we just update the whitelist (and remove the old address). It's very infrequent. I could probably count the number of times I've done it on one hand.
It might not be the most scalable setup but at our small size with everybody working from home it works great.
The only slight hitch is updating it when traveling, but even that isn't much of a problem. It takes a minute or two from the AWS console and it's good to go.
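That console step can also be scripted. A rough sketch with the AWS CLI, where the security group ID and the addresses are placeholders (and assuming it's SSH on port 22 being whitelisted):

```shell
# Swap the whitelisted home IP on port 22: revoke the stale /32
# rule, then authorize the new one. All values are placeholders.
OLD_IP=203.0.113.10
NEW_IP=203.0.113.55
SG_ID=sg-0123456789abcdef0

aws ec2 revoke-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$OLD_IP/32"
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$NEW_IP/32"
```

This needs AWS credentials configured, but it turns the "minute or two in the console" into one command you can run from anywhere.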
I recently took a look at digital ocean ($5 servers gives me ideas...) but didn't see a firewall option similar to the security group setup in AWS. If it does exist then I highly recommend it.
I believe if your version of OpenSSH is up to date, sshd will read the host key each time a session is opened and does not need to be restarted.
I had loaded up an Ubuntu Desktop droplet with the purpose of checking something out through the browser on the node.
The startup page was https://www.americanexpress.com/
Since when is that default?
Didn't think much of it at the time, but now... whoa.
Somewhat related: chicagovps gave me a 'fresh' gentoo vps, and the default provided root password was identical to the original one from several months ago. I assume it is one gentoo image with the same password (for all customers)?
If you are still reviewing salt, I just wrote a post about salt-cloud and DigitalOcean that you should check out -
Create your own fleet of servers with Digital Ocean and salt-cloud:
The claim that Danes are ambitious is flat-out wrong. The younger generation perhaps more so, but there is a reason why Danes are the "happiest people" in the world, and it's not their ambition.
The Danish model is under huge pressure and hasn't escaped the reality of globalisation and automation.
But because wealth gets distributed the way it does, it doesn't feel the heat too much just yet.
In other words, the Danish system is a thing of beauty as long as it works. Unfortunately it doesn't work anymore, and something's gotta give.
Edit: Was asked to be more specific.
Out of 6m people:
Almost 0.8m people are on some sort of social welfare.
Almost 0.8m people are working in full-time positions for the public sector.
In comparison, 1.9m work in the private sector, and that number is shrinking rapidly.
It is notoriously easy to start a company in Denmark but notoriously hard to grow it, among other things because most Danes don't have those ambitions and are very, very risk-averse.
We are long past the point where more people depend on the state than on the private job market, and as those jobs disappear because of the aforementioned automation and globalisation, and because Denmark is just too expensive, it will be hard for any government to keep the promise of the elaborate system we have now.
This is already starting to show, as the latest government scrambles to lower taxes for corporations and reduce the benefits Danes are entitled to.
Furthermore, Denmark took the wrong educational strategy and, unfortunately, like most European countries, believed that "knowledge worker" meant "book reader".
The result is that we have a large, over-educated part of the population who will have a very hard time finding a job.
I've only been to Denmark a few times, but the feeling I got is that they are happier because they have managed to largely sidestep the otherwise ubiquitous trap of perpetually escalating expectations.
The American/capitalist model is that each achieved goal is a platform for the next goal; growth is what matters. But being satisfied with a decent job and a peaceful context in which to love your family is not any less ambitious than desiring to get rich or "change the world." It's just ambition in a different direction.
Americans, for example, optimize for economic performance. Danes, I think, optimize for happiness. The tantalizing, troublesome idea that captivates me as an American is that money as an abstraction of 'value,' when survival is assumed, might only be desirable as a tool for being happier.
And if the pursuit of money, on a societal level, interrupts the pursuit of happiness, that implies that we capitalists are doing it wrong.
It's entirely possible to have a successful career working 9 to 5, with a sensible commute, in vast swathes of the UK. I've spent 4 years in Edinburgh and have always lived within a 10-minute walk of work, often within a World Heritage Site.
Childcare costs are, however, a good example of something that other Northern European countries tend to do better at. Though our recent government spat over staff-to-children ratios shows that the public just isn't rational on the issue, so changing it would be hard.
But, as Keynes said, "In the long run, we're all dead." Do you plan on listing your git check-ins on your tombstone? Will your epitaph be "He was ambitious?"
Most of us on this forum are fortunate enough to enjoy most of our work and are well paid to do it. Much of our work has novel, interesting, and innovative results. Many of us are happy to keep working as long as we can. All the greater shame on us for not having the imagination to visualize what life is like outside our fortunate circle.
Well, the Brits have an institutionalised system of honouring people who achieve, everyone from sports people to business people. Obviously it's incredibly political, but it is taken very seriously.
 - http://en.wikipedia.org/wiki/Law_of_Jante
 - http://en.wikipedia.org/wiki/British_honours_system
1. Taxes are ridiculous (my friend is in the 65% bucket, sixty five, after 2 years of work experience).
2. There are a huge number of people living on the welfare state.
3. The university system is far from ideal. There isn't much competition, and grades are given almost randomly, tending towards a political 7 (the average).
4. You pay 180% (one hundred and eighty percent) tax on top of the value of a new car when you buy it.
5. The cost of living is high, very high
6. Foreigners, unless they are from the US, are not very welcome (say what you want on this; it's been my impression and others'). Compare this with, say, Germany, and you see a big problem right there.
7. There is not really much good work. I get it, it's a country of 5-6 million people, but finding work in Denmark without Danish is like finding El Dorado.
Overall, I would not take this country as a "model". If you want a model, take Germany. Germany managed to climb out of total destruction (World War II) and the whole West/East mess without asking anybody for help. Germany today is probably one of the most open countries to foreigners. The police in Germany are great with everybody (I had my bad experiences with the Danish police while doing absolutely nothing wrong...).
Sometimes you need to look deeper to see what's really going on in a country. "Working hours" should not be the only way we measure things.