Let's not forget the CEO, who committed and risked resources on a hunch or instinct or who-knows-what.
If I had to pick one of the two to ask how they had the nerve to act, and to learn from, I'd pick him. (Of course I'd prefer both, and I don't mean to belittle her gumption and the skills to back it up.)
- What did he see to suggest risking those resources? ... To create a team of outsiders to work on the core app?
- How likely did he expect things to work out?
- How did he explain the expenditure of flying the others in to the CFO or whomever?
- Or did he make a unilateral decision without asking others?
- Did he just get lucky?
- Had he done things like this before and succeeded? Failed?
- Was he worried about making waves in his organization? Did he?
Plenty more questions pop up...
I mean I want it to be true. I'd like to live in a world where it's true. But I don't actually have any hard evidence besides the testimonials of people who have found success, and I'm not sure if it's survivorship bias or an accurate picture of how one can become successful.
My theory is that chances of success are incredibly variable, and trying hard and putting yourself out there will increase that chance, but I can't figure out a ballpark for the baseline.
Is it even possible to crunch the numbers on something like this? I feel like we'll never know.
Knowing you gave it a shot is the important part, I suppose.
The interesting bit, I think, is how the protagonist challenged the traditional relationship between employer and job seeker. Instead of pandering by praising Uber's design, she had the guts - possibly because of whiskey :) - to offer thoughtful criticism of the product.
As someone who recently finished a tough job search, I found this concept very liberating. Following the traditional process - researching a company's best features, trying to say the right things in interviews, waiting for callbacks - can feel discouraging. It can be like a bad round of speed dating.
Finding a creative, respectful way to point out a company's flaws is an innovative approach that, when done appropriately, can shift the ball back into the candidate's court.
I've always wondered: what did it take for him to come to <em>believe</em> that this wasn't crazy, but real? To have the courage to say it publicly?
We'll never know. Unfortunately, not long after his great discovery, in November 1958, Taniyama committed suicide. He was only thirty-one. To add to the tragedy, shortly afterward the woman whom he was planning to marry also took her life, leaving the following note:
<blockquote>We promised each other that no matter where we went, we would never be separated. Now that he is gone, I must go too in order to join him.</blockquote>
. . . In his thoughtful essay about Taniyama, Shimura made this striking comment:
<blockquote>Though he was by no means a sloppy type, he was gifted with the special capability of making many mistakes, mostly in the right direction. I envied him for this, and tried in vain to imitate him, but found it quite difficult to make good mistakes. (94)</blockquote>
Edward Frenkel, <a href="http://www.amazon.com/Love-Math-Heart-Hidden-Reality/dp/0465..."><em>Love and Math: The Heart of Hidden Reality</em></a>, which is recommended.
What mistakes have you made lately?
That said, it really helps if you're already hanging around at a party with the CEO of a big, in-the-news, growing startup and thus have insider access to tell him exactly what you think after a few drinks.
Although I appreciate that the designer took his offer seriously, more importantly it's the guts of the founder in this case that made it happen.
I've done similar things sometimes (not playing at that level, but similar in the end) and what I felt was a big discomfort and a really huge passion for something.
You want to defend your values.
These two together create a willingness to change the status quo and make something better, and that can move mountains. At the same time, I can say that in those moments you are terribly fragile.
Not even courage is needed. That's why it's so difficult to explain, it's something you feel inside and need to get out.
Amazing how societies all have this same period, whether it's college or 3 years in the woods. The rules are weirdly the same - "there's a time for everything, and it's college" translates to a random tribe in Africa almost literally.
You do know that was a coincidence, right? Or even a miracle, given how well things ended up. Had the Uber guy been a little pissed off or in a bad mood, the results could have been drastically different, and yet your blog-worthy suggestion is to take a leap of faith and see if it works.
Who cares? We are telling inexperienced, unprepared and even untalented kids to quit school and launch a "startup", whatever that means now.
Intro will enable LinkedIn to have the IP address of all of your staff using it, and thus (from corp Wifi, home locations of staff, popular places your staff go) they will know which IP addresses relate to your staff members (or you individually if you are the only person on a given IP).
This means that even without logging onto LinkedIn, if you view a page on their site they can then create that "so and so viewed your profile", which is what they're selling to other users as the upgrade package to LinkedIn.
Worse than that, as a company you can pay to have LinkedIn data available when you process your log files, and from that you know which companies viewed your site. And that isn't based on vague ideas of which IPs belong to a company according to public registrar info, this is quality data as the people who visited from an IP told LinkedIn who they were.
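To make that concrete, here's a hypothetical sketch of the kind of log enrichment being described. The IP-to-company mapping and the log lines are invented for illustration; LinkedIn's actual product and data format could look quite different:

    # Hypothetical sketch: enrich a web access log with an IP -> company
    # mapping of the sort a provider could sell. All data here is made up.
    ip_to_company = {
        "203.0.113.7": "Acme Corp",
        "198.51.100.22": "Example Ltd",
    }

    access_log = [
        '203.0.113.7 - - [27/Oct/2013:10:00:01] "GET /pricing HTTP/1.1" 200',
        '192.0.2.55 - - [27/Oct/2013:10:00:09] "GET / HTTP/1.1" 200',
        '198.51.100.22 - - [27/Oct/2013:10:01:44] "GET /careers HTTP/1.1" 200',
    ]

    for line in access_log:
        ip = line.split()[0]                # client IP is the first field
        request = line.split('"')[1]        # the quoted "GET /path ..." part
        company = ip_to_company.get(ip)     # known company for that IP, if any
        if company:
            print(company, "viewed:", request)

Visitors from unknown IPs stay anonymous; anyone whose IP has already been tied to an identity does not.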
Think of that when you're doing competitor analysis, or involved in any legal case and researching the web site of the other party.
And VPNs won't help you here, as you'd still be strongly identified on your device and leaking your IP address all the time.
There are so many reasons why this LinkedIn feature needs to die a very visible and public death, and very few about why it should survive. It's a neat hack for sure, but then so were most pop-up and pop-under adverts and the neatness of overcoming the "impossible" is no reason this should survive.
1. Attorney-client privilege.
I'm guessing most law firms use third party email servers, anti-virus, anti-spam and archive/audit systems which this would also apply to. It would also apply if you're using Rapportive, Xobni or the like (or integrated time-tracking, billing, CRM, etc.).
2. By default, LinkedIn changes the content of your emails.
Irrelevant. Unless you read your emails in plain text, every modern email client changes how email is displayed.
3. Intro breaks secure email.
Yes. Except iOS mail doesn't support crypto signatures anyway.
4. LinkedIn got owned.
Yes. LinkedIn adds an extra point of vulnerability.
5. LinkedIn is storing your email communications.
Well, metadata, but yes.
7. It's probably a gross violation of your company's security policy.
Yes. As is using LinkedIn itself. Or Dropbox. Or GitHub. Or Evernote. Or Chrome. Or any enterprise software that takes the bottom-up approach.
8. If I were the NSA
The NSA has access to your emails if they want them anyway. Email isn't a secure protocol against a well funded adversary.
9. It's not what they say, but what they don't say
10. Too many secrets
These all seem to be questions that can either be answered by testing or ones that LinkedIn would probably be happy to disclose, but unlikely to be major issues to mainstream users.
So fundamentally it comes down to two points: granting LinkedIn access to your email creates a new point of attack, and LinkedIn themselves might use your email in ways you find undesirable.
So it's essentially a trade-off for the benefits you get from the app versus those risks. For a personal account which you use for private emails, personal banking, etc. the evaluation is obviously going to be very much different from say a salesperson's work account which they use for managing communication with leads.
In the latter case they may already be trusting LinkedIn with similar confidential information and already use multiple services (analytics, CRM, etc.) that hook into their email, so the additional relative risk might be smaller.
As people with technical expertise we shouldn't use scare-mongering to push our personal viewpoints upon those with less expertise, but rather help people understand the security/benefit trade-offs that they're making so they can decide for themselves whether to take those risks.
It's important to treat the wider non-technical community with respect and as adults capable of making their own judgements and not as kids who need to be scared into safety.
This is really just a case of well-branded spearphishing. You should already be protecting against that.
Really? I guess you better have your own SMTP server set up then, or hope your email provider is willing to go to bat for your rights...
> 8. If I were the NSA
Yeah, it sounds like they definitely have needed it so far...
Five of the other things are basically the same point, remade in five different ways. This is a really weak list. There are certainly concerns, but most of these problems are symptomatic of our email system as it is. And have we all forgotten how crazy everyone went when we found out Google was going to start advertising in Gmail?
Serious questions though, if you are an IT shop - how do you defend against this trojan horse app?
We wanted to provide additional information about how LinkedIn Intro works, so that we can address some of the questions that have been raised. There are some points that we want to reinforce in order to make sure members understand how this product works:
- You have to opt-in and install Intro before you see LinkedIn profiles in any email.
- Usernames, passwords, OAuth tokens, and email contents are not permanently stored anywhere inside LinkedIn data centers. Instead, these are stored on your iPhone.
- Once you install Intro, a new Mail account is created on your iPhone. Only the email in this new Intro Mail account goes via LinkedIn; other Mail accounts are not affected in any way.
- All communication from the Mail app to the LinkedIn Intro servers is fully encrypted. Likewise, all communication from the LinkedIn Intro servers to your email provider (e.g. Gmail or Yahoo! Mail) is fully encrypted.
- Your emails are only accessed when the Mail app is retrieving emails from your email provider. LinkedIn servers automatically look up the "From" email address, so that Intro can then be inserted into the email.
After all, we are talking about the same team more or less, and surely the same company who owns Rapportive today.
If my concerns are real, one might find it ironic that Rapportive was backed by YC and Paul Buchheit, the creator of Gmail, and that this very company is now violating Gmail users' privacy.
What does the sig it appends look like? I will have to make sure to never send email to anyone who has the tell-tale "I opt into spyware" flag.
> These communications are generally legally privileged and can't be used as evidence in court, but only if you keep the messages confidential.
"LinkedIn Founder says 'all of these privacy concerns tend to be old people issues.'"
The bit about privacy starts at the 13 minute mark.
In the first instance I thought this was an app that was running in the background on your phone, I would have called that doing the impossible. This is just a MITM, and not a very good one at that.
It's interesting this "blog post" came from a professional security company which makes its money from scaring individuals and companies about security threats.
Is it just me, or is this firm even worse than LinkedIn?
When my friends ask for laptop-buying advice I tell them that if they like the keyboard and screen, then it's just plain hard to be disappointed with anything new.
I think I can pinpoint when this happened - It was the SSD. Getting an SSD was the last upgrade I ever needed.
Above that, PCs aren't necessary for a lot of people, because people do not need $2000 Facebook and email machines. For the median person, if you bought a PC in 2006, then got an iPad (as a gift or for yourself) and started using it a lot, you might find that you stopped turning on your PC. How could you justify the price of a new one then?
Yet if there was a major cultural shift to just tablets (which are great devices in their own right), I would be very worried. It's hard(er) to create new content on a tablet, and I don't really want that becoming the default computer for any generation.
I think it's extremely healthy to have the lowest bar possible to go from "Hey I like that" to "Can I do that? Can I make it myself?"
I think it's something hackers, especially those with children, should ask themselves: Would I still be me, if I had grown up around primarily content-consumption computing devices instead of more general purpose laptops and desktops?
Tablets are knocking the sales off of low-end PCs, but we as a society need the cheap PC to remain viable, if we want to turn as many children as possible into creators, engineers, tinkerers, and hackers.
The limiting factor is if your computer's feedback loop is tighter than your brain's perception loop. If you can type a letter and the letter appears, your computer is fast enough for word processing. But, if you can run a data analysis job and it's done before you release the "enter" key, it just means you should really be doing better analyses over more data. Certain use cases grow like goldfish to the limits of their environment.
A lot of people don't want to cook, so are happy with smartphones and tablets.
Why buy a desktop or laptop when an iPad will do everything you need for a fraction of the price? That's what people mean when they sound the death knell for the PC.
The only time I felt like I've needed an upgrade is while playing Planetside 2, which is/was very CPU bound for my setup. However, when it was initially released, Planetside 2 ran like a three-legged dog even on some higher end rigs. It's much better after a few rounds of optimizations by the developers, with more scheduled for the next month or two.
I dual-boot Linux on the same machine for my day job, 5 days a week all year. For this purpose it has actually been getting faster with time as the environment I run matures and gets optimized.
As good as it is now, I remember struggling to keep up with a two year old machine in 2003.
The Post-PC devices (tablets / smartphones) are it for the majority of folks from here on out. They are easier to own, since the upgrade path is heading toward "buy a new device and type in my password to have all my stuff load onto it." If I want to watch something on the big screen, I just put a device on my TV. Need to type, add a keyboard.
The scary part of all this is that some of the culture of the post-PC devices is infecting the PCs. We see the restrictions on Windows 8.x with the RT framework (both x86/ARM), the ARM machine requirements, and secure boot. We see OS X 10.8+ with Gatekeeper, sandboxing, and App Store requirements with iCloud.
The PC culture was defined by hobbyists before the consumers came. The post-PC world is defined by security over flexibility. Honestly, 99% of the folks are happier this way. They want their stuff to work and not be a worry, and if getting rid of the hobbyist does that then fine. PC security is still a joke and viruses are still a daily part of life even if switching the OS would mitigate some of the problems.
I truly wish someone was set to keep building something for the hobbyist, but I am a bit scared at the prospects.
1) Yes, I'm one of those that mark the post-PC devices as starting with the iPhone in 2007. It brought the parts we see together: tactile UI, communications, PC-like web browsing, and ecosystem (having inherited the iPods).
2) I sometimes wonder what the world would be like if the HP-16c had kept evolving.
In high school I recall lusting after a $4,500 486DX2 66Mhz machine with an astounding 16MB (not GB) of RAM, and a 250MB hard drive. A few months ago I spent a little less than that on a laptop with 2,000X that amount of RAM, 8,000X that amount of hard drive space, and a processor that would have not so long ago been considered a supercomputer.
I for one am glad that we have continued to innovate, even when things were good enough.
My dad went to Walmart and bought a computer (why he didn't just ask me to either advise him, or ask if he could have one of my spare/old ones I don't know) and monitor for $399.
It's an HP powered by an AMD E1-1500. It's awfully slow. It chokes on YouTube half the time. My dad is new to the online experience, so he basically uses it for watching streaming content.
I could have grabbed him a $99 Athlon X4 or C2D on craigslist and it would be better than this thing. I'm not sure if he'll ever experience a faster computer so I don't think he'll ever get frustrated with this machine, but it's amazing that they sell an utter piece of shit like this as a new machine.
It's that when tablets hit the scene, people realized they don't need their PC for 90% of what they do on a "computer". Email, social networking, shopping, music, video etc.
Us old geeks who swap hardware, play PC games, tweak OS settings and generally use yesterday's general purpose PC will be the ones remaining who keep buying new hardware and complete machines.
The general public meanwhile will only buy a PC if their tablet/smartphone/phablet needs expand beyond those platforms.
The market will shrink but it will turn more "pro". The quicker MS evolves into a modern IBM the better.
I'm still running fine with my 2007 Macbook, but I think my iPhone has extended its life because now my laptop almost never leaves the house and sometimes doesn't even get used in a day, whereas pre-smartphone I used to cart my laptop around rather frequently and use it every day.
I'm hoping that a new generation of largish (24-27") 4K displays will lead to a rebirth in desktop PCs, if only because we depend on them so much for professional work where they've fallen behind in experience when compared to high-end laptops, which shouldn't be the case!
Just because it doesn't sit in a big box doesn't mean it's a different class of system. The difference is really the openness of the platform, comparing something like iOS to Win 8 pro.
That said, many tablets are basically what we would have thought of as PCs before. Consider something like the Samsung 500T or similar, or the ThinkPad Helix. Components are small and cheap enough that they can be packed behind the LCD, and you have essentially a laptop that doesn't need its keyboard.
Will iPads take over PCs? No. They are too limited, not because of hardware, but because of OS limitations. Will tablets take their place though? Quite possibly. The portability is quite handy. That I can dock a tablet with a keyboard and have a normal PC experience, but have it portable when I need it is a selling feature.
The obvious caveat is that a limited OS is fine as long as the majority of data is cloud based. In that case even development can be done on a closed platform, and the tablet becomes something more akin to a monitor or keyboard. More of a peripheral than a computing device. We might get to that point, but that's not the cause of the current trend.
This is the number one reason why I love the PC above any other kind of computing machine. Need more disk space? Sure, go get a new disk, you may not even need to remove any of the others. Want a better graphics card for that new game? Easy as pie. Your processor died because the fan was malfunctioning? Too bad, but luckily those two are the only things you'll have to pay for. The list goes on.
I bought my current PC in 2009. The previous one still had some components from 2002.
They bought a Windows machine for what to them is a lot of money (more than an iPad); it didn't last long before it slowed down and picked up extra toolbars and all sorts of rubbish. What's worse is that this happened the last time they bought a PC, and the time before, and the time before that. They are not going to add an SSD because that's not how they think + they don't know how + it's throwing good money after bad + they are dubious of the benefits.
The iPad in contrast exceeded expectations, and in the year or two they've had it they've had a better experience. They can't get excited about another Windows machine because it's expensive, more of the same, and not really worth it.
Back in Q1 2010 I got an Intel Core i7 980X which benchmarked at 8911 according to http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7+X+980+...
Now in Q2 2013 (3 years later) the very top of the line processor available, an Intel Xeon E5-2690 v2, is only twice as fast at 16164: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2690+v...
It used to be that things got faster at a much faster rate. And until this new E5-2690 v2 was released, the fastest CPU was only 14000 or so, which is less than 2x as fast.
I will use my 2011 smart phone until it physically breaks. If a 1.2GHz device with a 300MHz GPU, 1280x720 screen, and 1GB of RAM can't make calls and do a decent job of browsing the web, that's a problem with today's software engineering, not with the hardware.
And if Google decides to doom my perfectly good device to planned obsolescence, fuck them, I will put Ubuntu Touch or Firefox OS on it. The day of disposable mobiles is over; we have alternatives now just like we do on PCs.
These days I just don't see that. Graphics cards seem to improve by 30-50% each generation, and because so many games are tied to consoles now, they often aren't even taking advantage of what's available. With multicore processors and the collapse of the GHZ race, there's no easy selling point as far as speed, and much less visible improvement (now all that useless crap can be offloaded to the second core!) and most consumers will never need more than two cores. Crysis felt like the last gasp of the old, engine-focused type of game that made you think "man, I really should upgrade to play this"... and that was released in 07. Without significant and obvious performance improvements, and software to take advantage, why bother upgrading?
Personally, I think of these hardware market developments with an eye toward interplay with the software market. Historically, software developers had to consider the capabilities of consumer hardware in determining feature scope and user experience. Hardware capabilities served as a restraint on the product, and ignoring them could effectively reduce market size. The effect was two-sided though, with new more demanding software driving consumers to upgrade. Currently, in this model, the hardware stagnation can be interpreted as mutually-reinforcing conditions of software developers not developing to the limit of current hardware to deliver marketable products, and consumers not feeling the need to upgrade. In a sense, the hardware demands of software have stagnated as well.
From this, I wonder if the stagnation is due to a divergence in the difficulty in developing software that can utilize modern computing power in a way that is useful/marketable from that of advancing hardware. Such a divergence can be attributed to a glut of novice programmers that lack experience in large development efforts and the increasing scarcity and exponential demand for experienced developers. Alternatively, the recent increase in the value of design over raw features could inhibit consideration of raw computing power in product innovation. Another explanation could be that changes to the software market brought about by SaaS, indie development, and app store models seem to promote smaller, simpler end-user software products (e.g. web browsers vs office suites).
I wouldn't be surprised if this stagnation is reversed in the future (5+ years from now) from increased software demands. Areas remain for high-powered consumer hardware, including home servers (an area that has been evolving for some time, with untapped potential in media storage, automation and device integration, as well as resolving increasing privacy concerns of consumer SaaS, community mesh networking and resource pooling, etc), virtual reality, and much more sophisticated, intuitive creative products (programming, motion graphics, 3d modeling, video editing, audio composition, all of which I instinctively feel are ripe for disruption).
Today, the calendar says it's time for me to upgrade again. Yet the pain of obsolescence of a five-year-old laptop in 2013 just isn't the same as in 2008: USB 3.0? What new applications is it enabling? Anything I need Thunderbolt for? Not yet. New Intel architectures and SSDs at least promise less waiting in everyday use... but I'm hardly unproductive with my old machine.
1. Consumer-affordable monitors. You'll need a better GPU, and probably DisplayPort. I don't expect most consumers to want a 30" 4K display. They'll want 22-27" displays of 4K resolution, a la Retina (PPI scaling). Everything is still the same size as people are used to (compared to 1080p), but everything is sharp as Retina.
2. 4K adoption of multimedia on the Internet. The more 4K videos that pop up on YouTube, the more people who are going to want to upgrade their hardware. This one isn't specific to PCs though, it could apply to mobile devices as well.
Go to YouTube and find a 4K video (the quality slider goes to "Original"). Now look at the comments. Many of the comments in 4K videos are people complaining how they can't watch the 4K video because of their crappy computer (and sometimes bandwidth).
The CPU is slow by current standards, but a Core2Duo isn't slower than the low-clock CPUs in many Ultrabooks. The 3 hour battery life could be better, but I can swap batteries and many new laptops can't. The GPU sucks, but I don't play many games anyway. DDR2 is pricey these days, but I already have my 8gb. SATA2 is slower than SATA3, but I'm still regularly amazed at how much faster my SSD is than spinning rust. It's a little heavy, but really, I can lift six pounds with one finger.
So the bad parts aren't so bad, but nothing new matches the good parts. The screen is IPS, matte, 15" and 1600x1200. Aside from huge monster gaming laptops, nothing has a screen this tall (in inches, not pixels) anymore. I can have two normal-width source files or other text content side by side comfortably. The keyboard is the classic Thinkpad keyboard with 7 rows and what many people find to be the best feel on a laptop. The trackpoint has physical buttons, which are missing from the latest generation of Thinkpads. There's an LED in the screen bezel so I can view papers, credit cards and such that I might copy information from in the dark, also missing from the latest Thinkpads.
Interestingly it seems like some would love to run their old OS on them. My Dad sort of crystallized it when he said "I'd like to get a new laptop with a nicer screen but I can't stand the interface in Windows 8 so I'll live with this one." That was pretty amazing to me. Not being able to carry your familiar OS along as a downside. That reminded me of the one set of Win98 install media I had that I kept re-using as I upgraded processors and memory and motherboards. I think I used it on 3 or 4 versions of machines. Then a version of XP I did the same with.
I wonder if there is a market for a BeOS like player now when there wasn't before.
I used to update for gaming and 3d almost entirely.
I also used to update more frequently for processor speed/memory that were major improvements.
If we were getting huge memory advances or processor speeds still there would be more reason to upgrade. Mobile is also somewhat of a reset and doing the same rise now.
---Why does nobody talk about them? Because nobody wants them, that's why. Imagine somebody brings you a personal desktop computer here at South By, they're like bringing it in on a trolley.
Look, this device is personal. It computes and it's totally personal, just for you, and you alone. It doesn't talk to the internet. No sociality. You can't share any of the content with anybody. Because it's just for you, it's private. It's yours. You can compute with it. Nobody will know! You can process text, and draw stuff, and do your accounts. It's got a spreadsheet. No modem, no broadband, no Cloud, no Facebook, Google, Amazon, no wireless. This is a dream machine. Because it's personal and it computes. And it sits on the desk. You personally compute with it. You can even write your own software for it. It faithfully executes all your commands.
So if somebody tried to give you this device, this one I just made the pitch for, a genuinely Personal Computer, it's just for you. Would you take it?
Even for free?
Would you even bend over and pick it up?
Isn't it basically the cliff house in Walnut Canyon? Isn't it the stone box?
Look, I have my own little stone box here in this canyon! I can grow my own beans and corn. I harvest some prickly pear. I'm super advanced here.
I really think I'm going to outlive the personal computer. And why not? I outlived the fax machine. I did. I was alive when people thought it was amazing to have a fax machine. Now I'm alive, and people think it's amazing to still have a fax machine.
Why not the personal computer? Why shouldn't it vanish like the cliff people vanished? Why shouldn't it vanish like Steve Jobs vanished?
It's not that we return to the status quo ante: don't get me wrong. It's not that once we had a nomad life, then we live in high-tech stone dwellings, and we return to chase the bison like we did before.
No: we return into a different kind of nomad life. A kind of Alan Kay world, where computation has vanished into the walls and ceiling, as he said many, many years ago.
Then we look back in nostalgia at the Personal Computer world. It's not that we were forced out of our stone boxes in the canyon. We weren't driven away by force. We just mysteriously left. It was like the waning of the moon.
They were too limiting, somehow. They computed, but they just didn't do enough for us. They seemed like a fantastic way forward, but somehow they were actually getting in the way of our experience.
All these machines that tore us away from lived experience, and made us stare into the square screens or hunch over the keyboards, covered with their arcane, petroglyph symbols. Control Dingbat That, backslash R M this. We never really understood that. Not really.---
Yes, PCs aren't ageing as fast as they used to.
But they are obsolete beyond 'not being portable'.
Here is why tablets are winning:
1. Instant on. I can keep my thoughts intact and act on them immediately. No booting, no memory lags, no millions of tabs open in a browser.
2. Focus. Desktop interfaces seem to be desperate to put everything onto one screen. I have a PC and a Mac (both laptops). I prefer the PC to the Mac; better memory management for photoshop and browsing, and I love Snap. But that's where the usefulness stops. With an ipad, I have no distractions on the screen.
3. Bigger isn't better. That includes screens. Steve Jobs was wrong. The iPad Mini is better than the bigger variants. Hands down. Same goes for desktop screens. I want a big TV, because I'm watching with loads of people. I don't need a big screen for a PC because the resolution isn't better than an ipad and I'm using it solo. Google Glass could quite possibly be the next advancement in this theme.
4. Build quality. PCs look and feel cheap. Including my beloved Sony Vaio Z. The ipad in my hand could never be criticised for build quality.
5. Price. The ipad doesn't do more than 10% of what I need to do. But, I do those 10% of things 90% of the time. So why pay more for a PC when the ipad has no performance issues and takes care of me 90% of the time.
I used to think shoehorning a full desktop OS into a tablet is what I wanted. Seeing Surface, I can happily say I was wrong. I don't want to do the 90% of things I do 10% of the time. That's inefficient and frankly boring. PCs and Macs are boring. Tablets are fun. There's one last point why tablets are winning:
6. Always connected. It strikes me as absurd seeing laptops on the trains with dongles sticking out. It takes ages for those dongles to boot up. I used to spend 5-10 minutes of a train journey waiting for the laptop to be ready. My ipad mini with LTE is ever ready. And cheaper. And built better. And more fun.
The PC isn't dead, but it will have next to no investment going forward, so will suffer a mediocre retirement in homes and offices across the world.
Note: I love my PC. I just love my ipad mini more.
I agree that the increased (functional) life of PCs is a contributing factor to slowing unit sales, but it's laughable to attribute it to the idea that people who once would have bought a new PC are now just buying more RAM and upgrading internals.
The percentage of people who would have any idea how to do that, or even consider it as a viable option, is far too small to have any real impact on demand.
1. Buy a mid-range processor with a lot of L2 cache.
2. Find a mobo that supports lots of RAM and stuff it to the max.
3. An SSD is a must.
4. Buy the second card of the high-tier model (the cut chip from the most recent architecture; in their time those were the 7950, 570, etc., but with NVIDIA's current branding a total mess, it may require some reading if you are on team green).
5. Any slow hard drive will be enough for torrents.
6. In 2 1/2 years upgrade the video card to the same class.
In 5 years ... if the market is the same, repeat. If it is not, let's hope there are self-assembled, non-locked devices on the market.
I have been doing that since 2004 and never had a slow or expensive machine.
The motherboards for PCs built 5 years ago are completely different from those built today, and the CPU sockets have changed every other year. New processors from Intel will be soldered on.
The performance of a PC from five years ago is probably adequate for web browsing and office tasks. For anything more demanding, the advances in power consumption, execution efficiency and process node are huge leaps from five years ago.
The netbook handled just about everything I threw at it, and with FreeBSD and dwm it ran faster than it did when I first bought it.
Unfortunately I'm not too pleased with the HP Envy 15. The AMD A6 Vision graphics aren't so bad, but support for the Broadcom 4313 wifi card is sparse in the *nix world...
Soon I'll be tearing it apart to swap out the bcm 4313 for something supported by FreeBSD, but for now, I'll not be purchasing a new PC any time soon.
My old T400 was "dying" until I put an SSD in it. Blew my mind how significant an upgrade that was. When it started "dying" again I maxed out the RAM @ 16GB.
The CPU is a bit lacking now that I want to run multiple VMs side by side, and the chassis has seen perhaps a bit too much wear, so a replacement is coming -- but I've managed to put it off for years, with relatively inexpensive upgrades.
It's wasteful to be throwing away computers constantly. In the PC world, I've noticed that it's particularly prevalent among "gamers" that are convinced that they need a new computer every couple of years.
This is not end-consumers nor businesses. Enthusiasts who were building and upgrading their computers were always a small market.
The article talks about upgrading repeatedly, but I don't think the author can extrapolate their own expertise over the rest of the traditional desktop users.
The PC market isn't dead; it is slowly receding and it won't stop. It's because of the new alternatives: assuming a finite budget, when you get one of the alternatives, which cost roughly as much as a consumer-level laptop, you don't have enough left for another PC that you don't need.
The article to me seems extremely narrow in both its oversight and scope. People don't care about processing power not because it's a marketing gimmick, but because they don't care. People who do care are the ones who know enough to care, and they will always be a minority.
1) I don't need to buy a new PC every two years anymore
2) Someone should make a tablet with slots so it can be upgraded like a PC
Personally, I upgrade incrementally, and I still use my PC on a regular basis. The machine I have now is a hodge-podge of parts from different eras. I have an Intel Q6600 but DDR3 RAM, and a modern, quite beefy graphics card that I bought when it was in the upper echelons in early 2013. It runs most modern games pretty well. I have an SSD for most software but also three big HDDs, one of which I've had since my first build in 2004.
I have a desktop with twice the processing speed and twice the ram, but for all intents and purposes, it runs almost exactly the same as the little Acer. Unless I am playing a game or running illustrator, I simply don't need the power.
I think this article gets it about right - I've started enforcing a 3 year cycle for both phone and laptops because they were costing me too much (in a mustachian sort of way) - and I've stuck to it with laptops (I made 3.5 years on a 2009 MBP) and will be doing so with the iPhone (due for replacement spring 2015.) If the nexus devices keep getting cheaper and awesomer, then I might jump to those a bit earlier (particularly if I can sell the 32GB 4S for an appreciable fraction of the new phone cost.)
Working with the 3.5 year old laptop got slightly painful (re-down-grading back to snow leopard from lion was essential, I even tried ubuntu briefly) but perfectly bearable for coding and web browsing. I'll see how slow the phone gets, but I'm quite relaxed about not having the latest and greatest iOS features (I've not seen anything compelling since iOS 5; I only did 6 because some new app requested it.)
 or rather, one was, and then I gradually replaced all the parts until I had a whole spare PC to sell on ebay, and one mobo bundle later and I'm still using it with no problems, playing games etc.
Yes, on paper, the latest processor is faster than the one released two years ago but you have to be doing specific types of workloads with it to really make a big difference.
I had a 2005 iMac before acquiring this 2011 iMac, and in between I've bought MacBooks and a MacBook Air. I'm thinking of getting my new desktop in 2015.
Thing is, when I go to my parents' house, I see 2003 computers. I think this reality applies to many families: parents don't care about speed; they get used to it because their needs are less computational and more casual, like browsing, Facebook and Skype. The trend I'm seeing in Spain is that getting iPads for parents is becoming notably common. All my friends, instead of upgrading their parents' PC desktops, are buying iPads, and the parents love them. Are you having the same experiences?
If anything, what is dead is the software's need for Moore's law.
Games can always use more resources. AFAIK there is still a lot of progress being made with GPUs. 60fps on a 4K display will be a good benchmark. The funny thing is that GPU makers have taken to literally just renaming and repackaging their old GPUs, e.g. the R9. As for the game itself, there is a looming revolution in gaming when Carmack (or someone equally genius-y) really figures out how to coordinate multiple cores for gaming.
But yeah, most everything else runs fine on machines from 2006 and on, including most development tasks. That's why Intel in particular has been focused more on efficiency than power.
 Tom's Hardware R9 review: http://www.tomshardware.com/reviews/radeon-r9-280x-r9-270x-r...
 Carmack at QuakeCon talking about functional programming (Haskell!) for games and multi-core issues: https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...
Tablets and those funky phones are popular today; something else will get popular after them. The PC may never be as popular as they are, but it is here to stay.
There was a time when you felt like a new PC was obsolete the second you took it out of the box. But that was because we were just scratching the surface of what we could do with new hardware. We're now at a point where it's hard to find consumer and business applications for all the spare hardware that you can afford.
Mobile adoption has been so quick because everyone is buying devices for the first time (tablets), or there is an incentivized two-year replacement cycle (phones). But I'm still using an original iPad that works just fine, and a 3 year old cell phone with no reason to upgrade. Eventually, I think we'll start to see the same leveling off in mobile as well.
It's really nice when some build process takes less time because of better hardware. Also, try running some upcoming games on an old PC. Obviously the need for some hardware depends on what you are planning to do.
Microsoft and its SharePoint platform will keep SharePoint developers upgrading their desktops upon every release.
For laptops it's a different story. The big push seems to be in reduction of power consumption for longer battery life, which sounds pretty sensible to me. I guess if battery life is a big concern for a PC user, then it makes sense to go to a smaller process. That does seem like a pretty small reason to upgrade, though.
Another good indicator that the PC "game" has changed is that the two major commercial PC OS's just released their latest versions (Mavericks & 8.1) for free.
Saying that the PC is dead is correct, in a sense. Almost everyone I know buys a laptop instead of a desktop PC. I know a lot of people that do not have a desktop, but I don't think I know a single person that doesn't have a laptop.
It's like saying the Novel is Dead. Plenty of novels are being written, but it is really not the one major form of art that people are discussing. That is being replaced by television and film. Will there be novels written fifty years from now? Most definitely. But still, the idea that the novel is the one true form where the greatest art occurs is over.
Although one could argue that network bandwidth is still an area that affects the "everyday stuff".
Haswell architecture couldn't have hit the market at a better time for laptop owners, with more powerful integrated graphics and low power use. I'm sure it isn't a coincidence.
I'm thinking of my parents - they will use that 2000 PC until it's not booting up, and then they'll worry about upgrading.
Why? What is the benefit of doing so when everyone wants a deterministic build?
... as long as you also trust the compiler not to introduce any backdoor... (cf. Reflections on Trusting Trust)
Import the .asc file into the keyring (File > Import certificates). Now you should mark the key as trusted: right-click the TrueCrypt Foundation public key in the list under the Imported Certificates tab > Change Owner Trust, and set it to "I believe checks are casual". You should also generate your own key pair and sign this key with it, in order to show you really trust it and get a nice confirmation when verifying the binary.
- Starts very light, bare bones, downloads torrents and that's all
- Gets bloated with more and more features that nobody wants
- Partners with a shady company
Off to alternatives I go.
The day uTorrent pushed the update that tried to install a browser extension I was absolutely done with them. I do not support malware in any shape or form.
1.6.1 is lightweight, unmolested, and still worth using.
runs fast, no ads, no issues, just works!
As for uTorrent, it's been going down this path for a while, gradually introducing crap into the app. And this one is the last for me, as well.
Btw, apparently, they turned off registration on the forum to ward off the mounting complaints. When I go to https://forum.utorrent.com/register.php, I'm greeted with "Get lost spammer, we don't need your kind here." And of course the topic is closed. Well done.
Let's be clear here: the user was still given a choice, but the user "trusted" uTorrent to not force them to make one. Give me a break.
It was a beautiful bit of software.
I've also used Deluge, but there's nothing too special about it in my eyes.
I stopped using it about 2 yrs ago for similar reasons. It's malware-seeding garbage now.
The problem with NIST Dual_EC_DRBG is simpler than the article makes it sound. A good mental model for Dual_EC is that it's a CSPRNG specification with a public key baked into it (in this case, an ECC public key) --- but no private key. The "backdoor" in Dual_EC is the notion that NSA --- err, Clyde Frog --- who is confirmed to have generated Dual_EC, holds the private key and can reconstruct the internal state of the CSPRNG using it. I think this problem is simple enough that we may do a crypto challenge on a toy model of Dual_EC.
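To make that mental model concrete, here is a toy sketch of the backdoor structure. It swaps the elliptic curve for plain modular exponentiation (so it is only an analogy, not Dual_EC itself), and every constant below is invented; the point is just that whoever chose the relationship between P and Q can recover the generator's internal state from a single output:

    # Toy analogue of the Dual_EC backdoor; all constants are invented.
    # Real Dual_EC uses elliptic-curve points P and Q; here ordinary
    # modular exponentiation stands in for point multiplication.
    p = 2**61 - 1              # prime modulus for the toy group
    trapdoor = 0xC0FFEE        # the designer picks this secret first...
    Q = 5                      # ...then any Q...
    P = pow(Q, trapdoor, p)    # ...and publishes P = Q^trapdoor.

    def step(state):
        """One DRBG step: the new state comes from P, the output from Q."""
        new_state = pow(P, state, p)     # analogue of s' = x(s * P)
        output = pow(Q, new_state, p)    # analogue of r  = x(s' * Q)
        return new_state, output

    seed = 123456789                     # honest user's secret seed
    s1, out1 = step(seed)
    s2, out2 = step(s1)

    # Whoever holds the trapdoor recovers the next internal state from a
    # single output (real Dual_EC also truncates outputs, so the attacker
    # has to brute-force a few missing bits on top of this):
    recovered = pow(out1, trapdoor, p)   # out1^t = Q^(s1*t) = P^s1 = s2
    assert recovered == s2
    assert pow(Q, recovered, p) == out2  # so every later output is predictable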
Nobody in the real world really uses Dual_EC, but that may not always have been historically true; the circumstantial evidence about it is damning.
The NIST ECC specifications are in general now totally discredited. If you want to see where the state of the art is on ECC, check out http://safecurves.cr.yp.to/.
You should never, ever, never, nevern, nervenvarn build your own production ECC code. ECC is particularly tricky to get right. But if you want to play with the concepts, a great place to start is the Explicit Formulas Database at http://www.hyperelliptic.org/EFD/; the fast routines for point multiplication are mercilessly complicated, so copying them from the EFD is a fine way to start, instead of working them out from first principles.
    Doing 2048 bit private rsa's for 10s: 1266 2048 bit private RSA's in 9.98s
    Doing 256 bit sign ecdsa's for 10s: 22544 256 bit ECDSA signs in 9.97s
    Doing 2048 bit public rsa's for 10s: 42332 2048 bit public RSA's in 9.98s
    Doing 256 bit verify ecdsa's for 10s: 4751 256 bit ECDSA verify in 9.92s
Not a cryptography expert here, I don't know how to respond to these.
    f : x -> pow(x, pubkey) mod m
    g : x -> pow(x, privkey) mod m
The "big breakthrough" result was actually proven by Euler hundreds of years ago!  The innovation of RSA was building a working public-key cryptosystem around Euler's result, not the result itself.
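For concreteness, here is a minimal sketch of that f/g pair using the usual textbook toy numbers; it only demonstrates the round trip that Euler's result guarantees, nothing secure:

    # Toy RSA with tiny textbook numbers; illustrative only, not secure.
    p, q = 61, 53
    n = p * q                    # the modulus m in f and g above (3233)
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent
    d = pow(e, -1, phi)          # private exponent, e*d = 1 (mod phi); Python 3.8+

    def f(x):                    # public map: x -> x^e mod n
        return pow(x, e, n)

    def g(x):                    # private map: x -> x^d mod n
        return pow(x, d, n)

    msg = 65
    assert g(f(msg)) == msg      # decrypting an encryption recovers x
    assert f(g(msg)) == msg      # and the signing direction round-trips too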
The problem with most lock screen enhancements is that anything you put there is outside your phone security "firewall" and available to anybody who picks up your phone. The 4.2 lock screen widgets work fairly well with this (eg: you can open the camera app without unlocking the phone, but attempting to swipe over the gallery forces you to unlock). However they are (I assume) using the core framework APIs to do that and I presume support for it is coded into the apps, while this seems to be doing it for any app.
How do I know you're not sending my usage patterns upstream to CoverCorp? How do I know that you're not reading the Android Music Provider database, and sharing my data back?
I hate the idea it needs all sorts of server connections for their business model. I don't know a way around that, but if they or another company figure out how, that's what people will gravitate toward. Especially given the paranoid climate.
This makes me want an Android. Great job, guys!
I've never liked Android's implementation of home/app screens (widgets + some apps, tap to reveal all your apps).
I guess if you want a lot of clocks, Android is great.
This adds another app/button layer...
How well does it work with some kind of lock-screen security? The UX for that is always a hassle, and I'd love to find someone who is doing it well.
It would be good to be able to define actions based on location (either by which wifi I connect to or GPS) - as well as time of day.
(I'd like to have my screen auto dim at 10PM)
You don't prepay?
Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
The main difference seems to be that today even getting the top organic search result doesn't provide enough clicks for advertisers, so they feel obliged to purchase ads for their own brand names even when they already rank first. If people searching for Southwest Airlines on Google aren't ending up on the Southwest Airlines website without a huge great banner ad (despite it being ranked at the top of the results) then something is going badly wrong on the Google search results page.
I think this is a pretty disingenuous analysis of what's going on. It's obvious from the comparison to the [Virgin America] search that this is a bigger change than just adding a "banner ad".
Notice that for [Virgin America] there are _two_ spots that bring you to virginamerica.com, the ad and the first organic result. This is redundant, wastes space, and is probably confusing to some users. I don't know why a company buys ads for navigational queries where it's already the top result, but they do, and I'd argue it's bad for users.
On the [Southwest Airlines] query you can see that there's no redundant ad anymore - the navigational ad and the first organic result are combined. Calling that whole box an ad, when it contains the same content that the former top organic result used to, is misleading, but makes for a much more sensational headline when you want to claim that most of the screen is ads.
I'm not sure about the experiment, that's not my area, but my guess is that this is part of an attempt to avoid this ad+organic confusion for navigational queries, by allowing the owner of the first result of a nav query to merge the ad with the result into a professional and official looking box. Maybe that'll work, maybe not, which is most likely why it's an experiment.
In fact, spending money on their own brand keywords generated significant negative ROI (1).
So my guess is that this strategy from Google is designed to provide brands with a first step to generating actual value from Google search results.
I can see brands making these out-sized spends when able to provide their customers w/additional value like interactivity within the goog results, etc.
This sounds to me like a complete non-issue. If you don't like ads, install AdBlock. Of course if you need clicks for your website, carry on.
Same story (but no real discussion) was submitted here:
Call me cynical, but I suspect it will still be upvoted and discussed here because any comments on that earlier discussion will get lost in the noise of the close to 200 comments already there.
Important boss man also wants to get good results for 'acme blue widgets', 'tough widgets Alabama', 'naughty widgets' and whatever but only really cares about those secondary searches when someone else has told him to care about it. It is the main company name, in the search box that matters.
I think this is going to work well for all concerned and I don't share the cynicism most people seem to have about this.
First, probe the outrage machine for banners for particular brands. Then for a huge price tag, add lightweight widgets to the SERP for brands so searchers can e.g. buy tickets from the Google Search page. This is hailed by the brands as increasing sales dramatically. Demand for this feature grows.
Once significant numbers are using the SERP widgets, make the banners/widgets part of general non-brand search. Natural next step. A little bit of outrage, but at this point it just gets muffled by the masses. Life goes on.
All of these brands are getting increasingly dependent on Google's SERP widgets, which gives Google huge leverage. One deal leads to another, and before you know it Google starts buying up airlines to streamline everything.
So in 2030 we're flying Google Air using a Google phone to buy tickets to the Google Movies, to see a film made by a studio wholly owned by Google.
I'm not even saying this is a Bad Thing (tm). Just that if I were heading Google this would totally be my game plan.
1. Users tend to ignore the small ads on the right (anecdote: I do)
2. Users do notice and click on search results beneath the top query, even when they originally intended to arrive at their exact branded query
3. Search results beneath the top result are for competitors
Solution: Put in a huge "ad" to draw attention and also to knock competitors' listings to the very bottom of the screen or off the fold completely
If 1-3 hold true, then I could see it making sense competitively to shove those other results down the page.
Edit: aresant pointed out a good article that could explain the intent. Yay! Also, it wasn't my intent to hate on Google, just a thought experiment.
Ignoring that, it's unfair to use one example and say that search results are 12%. Is it 12% average, 12% median, or 12% for navigational queries only?
I just did a few searches for educational topics, got no ads. ... I would say there isn't a problem...
So long as Google only returns these sponsored ads for searches for the company name, I don't see this as being a problem at all, given the fact that many users are using the address bar integrated search in place of bookmarking or typing URLs.
Where this would become a problem is if they start expanding this to searches beyond simply the company name, and I think there is a bit of a gray area there. As someone else pointed out in this thread, showing the Southwest banner in response to a search for "cheap airfare" pretty unambiguously crosses a line, but what about something like "book southwest airlines flights"? One could argue that the user was attempting to get to the Southwest Airlines website to book a flight, so showing the Southwest banner would be appropriate; however, companies like Expedia, Kayak, and so on, whose links would now be much further down the page, would likely disagree.
Speaking of "high quality ads": the second Cheap-O-Air ad is for flights to the Southwest, not on Southwest Airlines - deceptive IMHO.
There will be no banner ads on the Google homepage or web search results pages. There will not be crazy, flashy, graphical doodads flying and popping up all over the Google site. Ever.
As someone who works in advertising, even I dislike banner ads. They are obtrusive, annoying, and take attention away. Google should go back to AdWords and make them better rather than anything else.
This ain't a big deal actually, it's a test to get more from their Adwords when people really search for the companies. But behold the future :( (investors, stocks, it will never be enough).
Google are so big and powerful that it's easy and tempting to think of them as invulnerable and immortal, but remember... people have thought that about many companies in the past, more than a few of whom are no longer with us.
Edit: OK, IF this really is only for brand names and doesn't show up for more general searches ("cheap airline tickets", etc.) then maybe it won't be received so badly. That said, I still believe that, in general, "big honkin' banner ads" are NOT going to be well received on Google search result pages. I guess time will tell.
Google is smearing the smartphone market, at the expense of Apple's cash engine, Microsoft is smearing the Search market at the expense of Google's cash engine and Linux is smearing the operating system market at the expense of Microsoft's cash engine. Seems like there is a lot of pressure to diversify.
- 1st quarter of last year: $16.01 billion
- 2nd quarter of last year: $21.46 billion
- 3rd quarter of last year: $20.49 billion
- 4th quarter of last year: $19.90 billion
- 1st quarter of this year: $18.53 billion (the "record" one)
MSFT is both a tech company and a utility.
It has growth potential (phones, surface, search, xbox) but it is also completely essential for global business (servers, AD, SQL Server, Exchange, Sharepoint).
In that sense it is a utility. If you took out all the MSFT software in the world everything basically stops. Your electricity probably doesn't work, you probably can't get on a train to get to work and if you manage to get to work you can't login to anything.
People say "but my company has BYOD!" that might be true, but MSFT is still the infrastructure it is running on. You can bring your AAPL car but you're still driving on an MSFT road.
Critics of Microsoft are wrong to call its enterprise business a dinosaur. There is no reason to think Microsoft won't continue to grow this business for decades to come.
But I would like to be able to own this as a pure play, not mixed up with XBOX. Let's call this company "Azure" and spin it off, like HP did with Agilent (which should have been called HP), and let the "devices and services" part screw around with reinventing itself.
RIM was a one trick pony. Microsoft has several billion dollar businesses.
For comparison there is the trend for 'iphone' and 'android'.
Sure Microsoft are doing loads of exciting things but people aren't typing 'Microsoft' or 'Windows' into the search engine box of Google as much as they used to. Make of that what you will.
1. Accelerate the Ballmer booting-out process. Why's he still there?
2. Boost Cloud.
3. Boost enterprise services and everything around them.
4. Stop wasting resources on the stupid consumer widgets department.
Also known as the Wile E. Coyote Syndrome.
TRWTF is that the author knows someone who reimplemented NodeJS in house.
Most testing services are horribly overpriced and you don't get that level of control. Maybe you want to reconsider that one?
I'd sooner spend engineering time writing our own report generators than writing code to push data to another service, in the format they expect, pre-empting which data we might want to have in that service down the track, understanding exactly how the different reports are calculated, and then inevitably having to write a few custom ones of our own as well.
Of course, the exceptions to this are services that have a huge set of useful features and take basically no time to fully integrate with, like Google Analytics.
When you pay for a SaaS service, assuming the product is good, you're essentially paying for an entire team to work on a problem you need solved but don't much enjoy, and which they love. You're also paying for the future of that product. Unless the software is really bad, or you need something so specialised that a current solution is out of the question, how does an engineer not understand this?
You only have limited time in the world. Work on something you find meaningful.
I have nothing against paying for software that someone else wrote, but I'd feel much more comfortable buying the software and running it on my own servers (with a contract that says that if the vendor goes out of business, I retain the right to use the software).
GitHub is a big part of it. At a lot of shops, you could drive to the next town and back in the time it takes to update the Wiki or close out a trouble ticket. Uncompromising speed is a feature that turns your developers into winners.
To head off some of the anticipated snark: this is very much a program with the goal of understanding basic science and developing tools to better understand how our brains work.
Currently, this is only available with invasive brain surgery that can often have complications. So money spent on better imaging and implant technologies will have a strong positive impact on the field.
Interestingly, the researchers I know in this area are confused about why such a big deal is being made about this "Brain Initiative" because the amount ($70M) is actually not a lot given how capital intensive this type of research is and how many labs it will be spread amongst. Still, any funding is better than no funding.
70 mn. over five years is (naively) 14 mn. per year -- which is 0.05% of DARPA's budget.
Google Glass is not a moonshot. A Google brain implant would be - just think about the difference in SAT scores and other standardized test scores between those with a brain implant and those without.
But seriously, this research can enable us to understand the brain better and help a lot of people.
Is there an equivalent of a tin foil hat that is available for subdermal implantation?
Kidding aside, military research has resulted in some of the most amazing stuff these days... like the entire space program and velcro.
I've got 4 sites running wordpress: my portfolio, my online store, a design database for items under $50, and a magazine cutout marketplace. (See my profile for links) Those last two I mentioned took less than a week to tweak and hack together thanks to the speed and ease of setting up wordpress sites.
It's sad that Matt Mullenweg never got the same recognition that Jack Dorsey or Mark Zuckerberg got. He definitely deserves it. We've got to stop only celebrating and worshiping people who make money. I think Matt empowered people just as much if not more.
The problem, then, is that general functions have no (essential) bandlimit. Remember that differentiation acts as multiplication by a monomial in the frequency domain. Non-constant polynomials always eventually blow up away from 0, so in differentiating, you're multiplying the function by something that blows up in the frequency domain. This means that, in the result, higher frequencies are going to dominate over lower frequencies, at a polynomial rate.
Let me be clear, the problem with numerical differentiation is not just that rounding errors accumulate, it's that differentiation is fundamentally unstable, and not something you want to apply to real-world data.
It depends very much on what your application is, however, I think generally a better approach to AD is to redefine your differentiation, by composing it with a low-pass filter. If designed properly, your low-pass filter will 'go to zero' faster (in the frequency-domain) than any polynomial, thus making this new operator bounded, and hence numerically more stable. It's not a panacea, but it begins to address the fundamental problem.
One example of such a filter is Gamma(n+1, n x^2)/Factorial[n], where Gamma is the incomplete gamma function (links below).
To see why this is a nice choice, notice item 2 in the incomplete gamma function's special values (also linked below). This filter is simply the product of exp(-x^2) (the Gaussian) and the first n terms of the Taylor series of exp(+x^2) (1/ the-Gaussian). Since that series converges unconditionally everywhere, the filter converges to 1 for a fixed x as n -> +infinity; however, since it's still a Gaussian times a polynomial, it always converges to 0 as you increase x with n fixed.
This is my area of research, so if anyone's interested I can give more details.
 https://en.wikipedia.org/wiki/Band-limit https://en.wikipedia.org/wiki/Fourier_transform#Analysis_of_... https://en.wikipedia.org/wiki/Incomplete_gamma_function https://en.wikipedia.org/wiki/Incomplete_gamma_function#Spec...
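In case anyone wants to play with the idea, here's a minimal sketch (Python with NumPy, using a plain derivative-of-Gaussian kernel rather than the incomplete-gamma filter described above, so treat it as an illustration of the general approach, not the specific one) of composing differentiation with a low-pass filter, compared against raw finite differences on slightly noisy samples of sin(x):

    import numpy as np

    def dgauss_kernel(sigma, dx):
        # Discretized derivative of a Gaussian, truncated at 4*sigma where the tails are negligible.
        half = int(np.ceil(4 * sigma / dx))
        x = np.arange(-half, half + 1) * dx
        g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        return (-x / sigma**2) * g * dx

    dx = 0.01
    x = np.arange(0.0, 10.0, dx)
    samples = np.sin(x) + np.random.normal(scale=1e-3, size=x.size)    # sin(x) plus a little noise

    naive = np.gradient(samples, dx)                                   # raw central differences
    filtered = np.convolve(samples, dgauss_kernel(0.1, dx), "same")    # derivative composed with a low-pass

    interior = slice(200, -200)  # ignore the convolution's boundary effects
    print("max error, finite differences:", np.max(np.abs(naive[interior] - np.cos(x[interior]))))
    print("max error, filtered derivative:", np.max(np.abs(filtered[interior] - np.cos(x[interior]))))

The naive derivative amplifies the noise by roughly 1/dx, while the filtered version stays close to cos(x), at the cost of slightly attenuating high frequencies.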
For any function that is not a combination of polynomials, you need to have its Taylor expansion up to the desired order of derivatives, so you can't just take an "arbitrary" function and use this method to compute its derivative in exact arithmetic.
So for anything other than polynomials, you just reword the problem of finding exact derivatives to finding exact Taylor series, and in order to find Taylor series in most cases, you have to differentiate or express your function in terms of the Taylor series of known functions.
Edit: Indeed, take the only non-polynomial example here, a rational function (division by a polynomial). In order to make this work, you have to know the geometric series expansion of 1/(1-x). For each function that you want to differentiate this way, you have to keep adding more such pre-computed Taylor expansions.
They're efficient enough for first-order derivatives. For example, they are used in Ceres, Google's library for non-linear least-squares optimization.
All values tried so far agree with Wolfram Alpha, so color me surprised and happy for learning something new.
This really does fall in the realm of algebraic geometry, since this method only works for rational functions - as he implemented it.
To numerically compute sin(x + epsilon) you need the Taylor series.
Encoding power series as matrices is sometimes convenient for theoretical analysis (or, as here, educational purposes), but it's not very efficient. The space and time complexities with matrices are O(n^2) and O(n^3), versus O(n) and O(n^2) (or even O(n log n) using FFT) using the straightforward polynomial representation (in which working with hundreds of thousands of derivatives is feasible). In fact some of my current research focuses on doing this efficiently with huge-precision numbers, and with transcendental functions involved.
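As a rough illustration of that complexity gap (my own sketch, not from the article): multiplying two series truncated at order n costs O(n^2) with plain coefficient lists, versus O(n^3) if each one is first encoded as an (n+1) x (n+1) matrix.

    # Truncated power series as coefficient lists: [c0, c1, ..., cn] means c0 + c1*x + ... + cn*x^n.
    def series_mul(a, b, n):
        out = [0.0] * (n + 1)
        for i, ai in enumerate(a[:n + 1]):
            for j, bj in enumerate(b[:n + 1 - i]):
                out[i + j] += ai * bj
        return out

    # exp(x) and exp(-x), each truncated at order 2; their product should be 1 up to that order.
    print(series_mul([1.0, 1.0, 0.5], [1.0, -1.0, 0.5], 2))   # -> [1.0, 0.0, 0.0]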
The number a+bd can be encoded as...
The number a+b*epsilon can be encoded as...
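For the curious, here's a minimal dual-number sketch in Python (my own illustration of the a + b*epsilon idea, with epsilon^2 = 0, not code from the article). Note that sin has to bring its own rule, derived from its Taylor expansion, which is exactly the point made above about non-polynomial functions:

    import math

    class Dual:
        # Represents a + b*epsilon with epsilon^2 = 0: a is the value, b the derivative part.
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.a + other.a, self.b + other.b)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)  # the epsilon^2 term vanishes
        __rmul__ = __mul__

    def sin(x):
        # First-order Taylor rule: sin(a + b*eps) = sin(a) + b*cos(a)*eps
        return Dual(math.sin(x.a), x.b * math.cos(x.a))

    def f(x):
        return sin(x) * x + 3 * x * x   # f(x) = x*sin(x) + 3*x^2

    result = f(Dual(2.0, 1.0))          # seed the derivative part with 1
    print(result.a)                     # f(2)  = 2*sin(2) + 12
    print(result.b)                     # f'(2) = sin(2) + 2*cos(2) + 12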
Radiation levels due to nuclear testing were elevated 7% over normal
The major source of radioactivity in steel is cobalt 60, which has a half life of 5.27 years
In which case one could just wait a year and the radioactivity of your steel would drop by 7%, making up for the effects of nuclear testing contamination. Put another way, steel from 1944 has been around for some 10 half-lives of cobalt-60, meaning it has 1/2^10th as much 60Co radiation as when it was made. Why would it matter if the radioactivity was 1/2^10th or 1.07/2^10th as much as the background radiation?
I'm sure there are other isotopes which make this more of a problem, but the facts as presented in this article don't make much sense.
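A quick back-of-the-envelope check of that arithmetic, plugging the 5.27-year half-life quoted above into the commenter's reasoning:

    half_life = 5.27                    # cobalt-60 half-life, in years
    elapsed = 10 * half_life            # "some 10 half-lives", roughly 53 years
    remaining = 0.5 ** (elapsed / half_life)
    print(remaining)                    # ~1/1024 of the original Co-60 activity
    print(1.07 * remaining)             # a 7% elevation is negligible after that much decay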
The article says background radiation levels peaked at .15 mSv in 1963. Looking at the Wikipedia page on sieverts, I am trying to compare this to other radiation examples, but I'm not sure how to draw a comparison.
Would a human standing outside be receiving .15 mSv per hour? year? total?
I've half a mind to write a bot that submits a random article every 48 hours.
Then again, cartopy is only a year or two old, so it doesn't have the traction that basemap does. It's gained a fairly large following very quickly, though.
Hofstadter should be COLLABORATING with all those other researchers who are working with statistical methods, emulating biology, and/or pursuing other approaches! He should be looking at approaches like Geoff Hinton's deep belief networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing and contrasting them with his own theories and findings! The converse is true too: all those other researchers should be finding ways to collaborate with Hofstadter. It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
All these different approaches to research are -- or at least should be -- complementary.
There have been attempts to understand intelligence with intelligence (logic, symbols, reasoning, etc.) for 30 years, to not much effect; now AI and machine learning are advancing quite steadily, so why the snark? All evidence suggests that the way the brain itself learns things is statistical and probabilistic in nature. There are also new disciplines now, like Probabilistic Graphical Models, which are free of some of the traditional downsides of purely statistical methods, in that they can be interpreted and human-understandable knowledge can be extracted from them. This really seems promising, and to some extent it is a union of the old and new approaches, despite the claims of a big division, but it is hard to see much promise in purely symbolic methods invented merely by some guy somewhere thinking very hard.
I for one am very happy that people seek inspiration in the way the human brain works; that's what science is. If you just come up with things without consulting the real world, it's not science, it's philosophy - the one discipline that has yet to produce a single result.
This comparison between complementary approaches is an apt analogy for most fields, where the focus shifts every once in a while, when one of the approaches largely hits a wall and most people switch to the other one. A while later, the trends will almost inevitably reverse and draw inspiration from other approaches. The unfortunate thing is that there's no dialogue between the two camps, which makes it that much harder to port good ideas from one context to the other.
I could provide examples from physics research, or for that matter, trends in static-vs-dynamic blogs :P Also, the more "applied" the field, the shorter these cycles are.
I find the analogy to Einstein at the end of article especially funny. I think it's much more likely that people will look upon current defenders of "good old fashioned AI" like they now do upon people who still looked for ether after Einstein's discoveries.
DH is the best-known of a small, stubborn group of AI developers who still believe that "human thought" can be reasoned about and understood in isolation, and that we can build intelligence without simply reducing it to statistics or to brain anatomy.
I applaud his efforts, and find some of the programs he's written both creative and refreshing.
Hofstadter's lecture about analogy on YouTube: http://www.youtube.com/watch?v=n8m7lFQ3njk
Also some earlier work on the subject
I have also written a review of this very interesting book, "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking".
I cannot recommend "Creative Analogies" more. I have purchased no less than four copies (two for myself; two for others, including K. Barry Sharpless, who once made a remark about AI that was reminiscent of some of the ideas in CA) over the years. It's even better than "Surfaces".
When I was in college (and GOFAI was still alive) GOFAI researchers themselves portrayed him as very much an outsider.
If anything, the human mind seems to me to be a particular algorithm that is flexible, but trades that flexibility for capability in certain problem areas. Using a transportation metaphor, it's like walking versus air travel. Walking is incredibly flexible when it comes to where you can go, but air travel is by far the optimal route to get from coast-to-coast, although you are limited to travelling between airstrips. I feel like focusing on the human brain as the "true" intelligence is like claiming that walking is the only true transportation, instead of focusing on optimal routes for each problem.
I was under the impression that Wilhelm Wundt was the father of psychology.
I don't see any sign that anyone is selling stock... these numbers are all consistent with 13% dilution due to Twitter issuing new stock.
Can anyone comment on that or shed some light? As a potential investor, those factors make me shy away from these investments, as they make the stock more volatile to changes and put the fate of the stock in a few large holders' hands.
But with all that data and all those users and celebrities, there are so many possibilities and directions it can go. If I owned them, the front page would be all entertainment news: hire bloggers to scan through tweets and write stories about the latest gossip. Why let TMZ and other entertainment news outlets get all the ad revenue by writing about the info coming directly out of your site? Twitter should be the #1 webpage for celebrity news and gossip. The kind of people who really use Twitter are obsessed with this garbage, so it seems only natural.
Companies offer stock as an investment opportunity to raise capital. Right?
I am thinking of buying a very small number of shares and "gifting" them to my sister - a super user of their service... more for novelty than as part of any serious investment strategy.
From the outside looking in I'd rather fill that role with someone with car industry experience bringing actual cars to market, because battery life and industrial design are somewhat fungible, but if Tesla is late on bringing car models to market that has a serious effect on their timelines.
"I want it to work like A."code it up, find twenty edge case potentials"Well, 3, 5, 6, and 7 can be solved like this, let's guard to make sure 1, 3, 8, 9 never happen, 10-13 can never happen, and let me get back to you on the rest."code it up, notice a few more edge cases.Lather, rinse, repeat.
Along the way, one of the edge cases will show me that what they really want is B, and time and budget let me know if we start down that path.
I mean I get what's going on here, but the syntax clutter drives me nuts.
[["if", "eq", "decline"], "Pending Response"]
My experience with workflow engines (as a developer) has been ... not great. They are hard to debug, and difficult to code once the logic gets complicated.
I've signed up to your mailing list anyhow. (Khuram Malik)
Look forward to chatting to you.
Ctrl-r = reverse history search. Type a partial command after Ctrl-r and it'll find the most recent executed command with that substring in it.
Press Ctrl-r again, jump to the next oldest command containing your substring. Did you accidentally press too many Ctrl-r? Press backspace to move forward in history.
    [alias]
        br = branch
        co = checkout
        ci = commit -v
        sci = svn dcommit --interactive
        cp = cherry-pick
        l = log --pretty=oneline --abbrev-commit
        l3 = log --pretty=oneline --abbrev-commit -n3
        lm = log --pretty=oneline --abbrev-commit master..
        rc = rebase --continue
        st = status -sb
        squash = !git rebase -i --autosquash $(git merge-base master HEAD)
if [ -f ~/.functions ]; then . ~/.functions; fi
A tweak to the "editbash" suggested alias will make it so that you don't have to reopen your terminal. My equivalent alias is "vif", for "vi .functions":
alias vif='vi ~/.functions; . ~/.functions'
Lastly: brevity is king. I love 'alias psgrep="ps aux | grep"', since I use it several times a day, but to "level up your shell game", keep it short. My alias for this command is "psg". The other alias that I use all the time is "d" -- "alias d='ls -lFh --color --group-directories -v'".
What do you mean you've never used Emacs? mumble whippersnappers mumble
(pt2/pt3 borked on IBM's site)
Alternatively, you can just do '. ~/.bash_profile' or 'source ~/.bash_profile'.
...after aliasing git to 'g' in your shell config, of course :)
Tilde-hyphen expands to the previous directory you were in, and of course "cd -" returns you to your previous directory, so I put them together all the time.
Here's an example workflow (with a fake PS1):
    mac:/Users/me/Projects/my_new_app$ cd ~/.pow
    mac:/Users/me/.pow$ ln -s ~- .
    mac:/Users/me/.pow$ cd -
    mac:/Users/me/Projects/my_new_app$
That's a bit of a contrived example above. Here's a more realistic way to do a symlink for pow:
mac:/Users/me/Projects/my_new_app$ ln -s `pwd` ~/.pow/
> ctrl + left arrow: moves the cursor to the left by one word
> ctrl + right arrow: moves the cursor to the right by one word
Alt + b and Alt + f are also aliases for the same action.
I also enjoyed the format of the article. A whole dev team each contributing their own piece to a blog post provides a lot of different voices and styles in a concise way.
It's a great project anyway and it did take 2-3 weeks to decide between the two. Good luck with the project!
So the marketing company or the sales dept turns up the juice a little bit. They allow them to stand out a little more. They give them special bells and whistles the user created content doesn't have. They let them animate, float over content or auto play video or sound.
Why? Because it makes the product a little better for the customers: the advertisers. Instagram, just like their Facebook overlords, are now in the business of making their product as good as it can be for the advertisers while keeping the users just happy enough not to leave.
I really dislike this cycle in the startup world. I don't know a good way around it. I'd like to see someone disrupt that.
I assume either a) it's prohibitively expensive to implement such an ability, based on how much revenue they expect vs. lost ad revenue vs. lower user engagement due to ads lessening the user experience, or b) people are less willing to pay to put ads on your service at a given price if there is a segment of your user base that is guaranteed not to see them.
I'm sure big, sexy brands will have no problem creating some nice-looking visuals that will fit right in. The question for me is whether this will ever work for the long tail of advertisers currently tweaking their keywords on AdWords. Does anybody have any insight or experience with this?
 Just one of them: http://www.digitalks.me/social-media-marketing/instagram/how...
There are a number of startups including Nitrogram, Olapic and Pixlee who seem to be doing well selling analytics, user-generated content, customer service tools and branded contests based on Instagram. Those business models seem like a much better fit for the platform.
Sad to see Instagram take the lazy route.
See Mailchimp, free up to 2k subscribers, paid after that. It's so simple.