"Something you have, something you know, something you are".
It's great work. I hope the fact that you can make a copy from a simple HD photo will bury people's ideas about fingerprint security for good.
Also noticed there were some questions about neck pain. If I adjust the nose-pad to the right height and angle, it forces me to sit up straight. However, in the video my back is arched quite a lot because I was sitting at a table that was too low (it was only a temporary setup).
The second time, I was in a different country and was sent to a surgeon. He had no clue what RSI was and said that I should not worry until it got worse and I was ready for carpal tunnel surgery. I said "no thanks" and found a sports therapist instead who was ready to help.
The main lesson here is that "see a doctor" does not always work. I am sure that if I had seen a surgeon the first time, I'd have stitches on my wrists right now. Back then, RSI was not a well-known thing at all. Hopefully, it is more easily recognized now.
I recently suffered some amount of tendonitis in my wrist and it prompted me to make quite a few changes:
* Better posture
* Better seat adjustment
* A nicer (mechanical) keyboard
* Practising touch typing more (i.e. correcting myself any time I use the wrong finger)
* Resting my wrists evenly
I made all these changes simultaneously, so I don't know which one helped, or whether it was a one-time thing and rest alone fixed it. An interesting discovery was that I overuse my right hand.
`vim` binds simple movements to hjkl, and that's fine because they're on the home row, but it also means that a lot of the time I'm holding down one key while reading code. I've switched to moving around code better now, using larger jumps, and when scrolling a lot I use my left hand. I've also rebound some other things so that they're easier to reach from the home row. Anyway, learning to be faster at all this took very little time. I am very impressed with how fast we learn new habits if we keep repeating them.
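For anyone curious what that kind of rebinding looks like, here's a tiny vimrc sketch in the same spirit (these exact mappings are invented for illustration, not anyone's actual config):

```vim
" Hypothetical examples: put larger jumps on comfortable chords
" so hjkl is held down less while reading code.

" Bigger vertical hops
nnoremap <C-j> 10j
nnoremap <C-k> 10k

" Half-page scrolls reachable from the home row
nnoremap <leader>d <C-d>
nnoremap <leader>u <C-u>
```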
This says nothing about her capabilities; it's about the limitations of current input devices with respect to our hands.
This is proof that using a touchpad with your nose is no worse than using it with your hand. There is something wrong in that: try using any real-world interface with your hand, and notice how the shape, stiffness and flexibility of any handle, pen, button or spring you interact with give you information and let you operate with a superior kind of awareness.
It seems that it's mostly people who grip their mouse really tightly/type with tense fingers that experience the most problems - I remember when I first started typing, my fingers tired too easily because the keys were heavy, and I was exerting a lot of force trying to get the fingers to exactly where I wanted them to go. Later, when I got a "looser" keyboard and discovered that I didn't really need to hit the keys exactly in the middle but whatever could actuate them worked, my speed more than doubled and I could type for hours without feeling tired at all. The relaxation really helps. Same with mousing - if you find that you have to grip your mouse tightly to make precise movements, turn down the DPI and try lubricating it so it requires as little effort as possible to move. Personally, I don't really like using trackpads because of that friction.
When I started to get some pain in my wrists, I noticed it most when I was using photoshop and clicking a great deal. I tore apart an old USB mouse and wired a pair of foot switches in so I could click with my feet as well. It helped quite a bit. Now I rather prefer it, especially the right click action.
He does not use special devices, just a big mouse and a large AT/IBM keyboard.
You can check out a story very similar to mine: http://www.pgbovine.net/back-pain-guest-article.htm
Disclaimer: I am a CS PhD student at a top-tier US school. Suffered from RSI, tried everything. Was cured by Dr. Sarno's technique.
Also, good for her, finding a solution that works well for her, strange though it seems at first blush. I like her work, too.
Check out the Imak SmartGlove with Thumb, I can't use a computer without it. Well, of course I can but it is much less comfortable. The glove plus a Kensington Expert Mouse (which is actually a trackball with a scroll ring) plus a good chair (eg Steelcase Leap) will help a LOT.
"The Association of Mouth and Foot Painting Artists of the World"
"VDMFK supports and promotes artists who, due to disability or disease, cannot create their works of art with their hands, but have to use their mouths or their feet."
Thanks for the inspiration Michelle
If she is still doing those 11-15 hour work days and her neck doesn't get tired, I really think she might have had too hard a grip on her mouse, or just a bad mouse in general.
And Control VR uses gloves but might prove to be very accurate: http://controlvr.com
The VR headsets are driving the development of this technology.
I sometimes feel pain in my pinky fingers when typing, forcing me to adapt to using only my other fingers.
Just kidding. Glad to see such resourcefulness.
To see a proper implementation, which allows someone to work at a high level, is awesome.
Seriously though, that is awesome.
This was a hardware and software project and I was doing it all. This meant lots of precise motion at times. Running Solidworks or Altium Designer often meant very accurate tiny movement while pressing down on a button. Horrible stuff for your wrist.
I had seen just how bad this could get. I was friends with several people who did visual effects for motion pictures; same kind of work. They ran 3D workstations for a dozen or more hours per day, every day. One fellow had to have surgery on both wrists due to the damage. He was always in pain after that.
I decided I had to deal with the situation. I didn't want to end up like that.
First decision was that mice and touch pads were horrible input devices. I tested everything and concluded that low-friction thumb-operated trackballs were the best.
Beyond that, the relative angle of the hand to the forearm seemed to have a HUGE effect on causing inflammation, pain and injury. The flatter and more relaxed, the better. In fact, the most relaxed position had my hands drooping over the keyboard and trackball with virtually no tension on the upper tendons. This meant my standard desk had to go.
What I needed was a desk with a cavity into which my hands would droop and meet the keyboard or trackball. My forearms had to be fully supported in order to remove pressure from shoulders and posture.
I welded together a few iterations of the idea and ended up with a desk that was just fantastic. I could work on this thing for 16 to 18 hours a day and have no wrist burn whatsoever. Of course, I also implemented regimented breaks and exercises, but the desk, as well as switching to a trackball, made the most difference.
I can't help but think this girl did herself huge damage by using the touchpad for long hours. I particularly dislike touchpads on laptops (of any make and model): they are in the wrong place and add tension to your tendons precisely where you don't want it.
As for Michelle, wow, what an amazing person she must be.
It works better than you'd think. The PS4 trackpad isn't exactly brilliant, but I can move around the mouse and click on what I want to with some accuracy. Of course, the trackpad on the controller is very small and not very accurate, so it's not really practical for artwork or anything. But, with a better, larger trackpad, I can imagine this technique actually working. I might give it a shot at some point.
I am a bit worried about the inevitable neck and nose pain, though. I wish she had gone into a little more detail about how she avoids that. Maybe she just has a neck of steel?
For the curious, these are some other resources I've found about people working around RSI. Most of these are about using Dragon NaturallySpeaking to code by voice, since that's what I'm most interested in, but I think it's still interesting.
There really needs to be a list somewhere for open-source workarounds to disabilities. To the best of my knowledge, there really isn't one.
Natlink + Dragon NaturallySpeaking:
(NatLink, which lets you make custom speech commands for Dragon in Python, is currently being developed at http://qh.antenna.nl/unimacro/index.html, but that site's pretty incomprehensible. The original author's site is at http://www.synapseadaptive.com/joel/welcomeapage.htm. It's pretty out of date, but explains the fundamentals of the system better, I think.)
https://www.youtube.com/watch?v=8SkdfdXWYaI (don't bother looking around, the source code of this was never released)
Libraries for using Dragon NaturallySpeaking on Linux with VMs:
It seems a post about this kind of thing pops up about every other month or so. I'm thinking of showing off my system here when I polish it up a bit. It's not nearly as complicated as some of these other ones, but I'm beginning to get pretty close to normal typing speed coding by voice.
This article seems like a gimmick by someone who wants to be special instead of using perfectly effective solutions. Or it is a joke.
FYI to run Chrome in remote debugging mode on a Mac
> /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9222
GitHub repo if anyone is interested (With install instructions): https://github.com/auchenberg/chrome-devtools-app
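Once Chrome is running with that flag, it exposes an HTTP endpoint listing the pages you can attach to. A minimal sketch of querying it (the `/json` endpoint is part of the DevTools protocol; the helper names here are mine):

```python
import json
import urllib.request

def list_debug_targets(port=9222):
    """Ask a Chrome started with --remote-debugging-port for its
    list of debuggable targets, returned as parsed JSON."""
    with urllib.request.urlopen(f"http://localhost:{port}/json") as resp:
        return json.load(resp)

def page_targets(targets):
    """Keep only page-type targets, as (title, websocket URL) pairs."""
    return [(t.get("title"), t.get("webSocketDebuggerUrl"))
            for t in targets if t.get("type") == "page"]
```

A DevTools client (like the app linked above) then connects to one of the returned websocket URLs.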
In general, accepting some kind of discomfort helps the body to accept it and deal with it. I discovered this a few years ago when walking through biting cold and deciding to EXPECT the sensation and accept it. My body kicked up the temperature production, and I felt a bit like that guy who can regulate his body temperature naked in the snow. Although of course less genetically adapted than he is :)
- Subroutines: not a free lunch
- Libraries: not a free lunch
- Client-Server: not a free lunch
- OO: not a free lunch
- Multiprocessing: not a free lunch
Improved tooling can bring both down, but not by that much. That's why you will not see many microservices that are truly micro. You won't see (I hope) a 'printf' microservice, nor even an 'ICU' microservice. A regex service might make sense if the 'string' it searches is large and implicit in the call (say, a service that searches Wikipedia), but by that time it starts to look like a database query. Is that still micro?
In the big app: 1) a single syntax error breaks everything; 2) simply loading the whole app takes a huge amount of RAM, and tests are very slow; 3) there is a gigantic dependency tree, since when you depend on a module you also depend on every one of that module's dependencies; 4) almost no one knows everything about the app; 5) it is impossible to split the company into separate services without fights over shared code and architecture decisions.
Edit: Since this question is getting many replies, I'll be a bit more specific about what I am looking for. Is there a tool that would let me describe the infrastructure and deploy it on a given cloud provider, but also deploy the same infrastructure on my local machine (using VMs/Docker) for development purposes?
This doesn't seem like an entirely fair comparison:
"It seems to me that all three of these options are sub-optimal as opposed to writing the piece of code once and making it available throughout the monolithic application."
There's a different scenario that could have played out with how to share a library between different services. You could have written the bulk of your application in the same language, like a monolithic application but split into several services. In that case you could create a library for your tax calculations and use it freely within your services.
For me, I split my application into a small number of services and, as much as possible, split things out into libraries to make reuse simple (more libraries, thinner applications).
Sometimes I use different languages, but when I do, I consider very carefully whether the (rather large) tradeoff that presents will be worth it in the long run for what I'm getting in the short term.
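To make the same-language scenario concrete, here's a tiny sketch (the module layout, function names, and tax rate are all hypothetical) of one tax routine shared by two services:

```python
# shared_tax: the hypothetical common library
def sales_tax(amount_cents, rate=0.08):
    """Flat-rate sales tax, rounded to whole cents."""
    return round(amount_cents * rate)

# orders service: calls into the shared routine
def order_total(subtotal_cents):
    return subtotal_cents + sales_tax(subtotal_cents)

# invoicing service: same routine, no copy-pasted tax logic
def invoice_line(subtotal_cents):
    return {"subtotal": subtotal_cents, "tax": sales_tax(subtotal_cents)}
```

Both services depend on one library, so a change to the tax rules happens in exactly one place, at the cost of coordinating library upgrades across deployments.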
A question: when people talk about microservices, how small are they talking?
This is a crucial point. 'We use microservices' does not say much unless you can describe how you design consistent and granular service interfaces. Otherwise you most probably just produce microservice spaghetti.
Don't get me wrong, I love microservices and will try to use them in all my future work, but I think where people often go wrong is that they overcommit without realising the downsides. I often see people casually using AMQP queues for pretty much everything; only when they need the worker service to talk back to the originating service do they realise they've made a wrong architectural decision.
* who cares if you're deploying 20 services or 1 when you run your deploy script?
* who doesn't have an operationally focused dev team?
* who isn't already dealing with interfaces even if not using microservices?
* since when do microservices have a monopoly on distributed architectures or asynchronicity?
I can't agree testing is harder either. Too much fluff in this article to read much value into it.
For example, breaking changes are breaking changes in either system. It's not an issue of architecture style, it's a matter of business needs changing, and thus protocols change. A change in protocol breaks the existing protocol.
We understand this intuitively when you are talking about a de-facto protocol like HTTP, but we seem to think our own programs are somehow different. They aren't.
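A toy illustration of that point (field and function names invented): renaming one field upstream breaks every consumer still speaking the old protocol, whether that consumer is another service or just another module in the same program.

```python
def parse_order_v1(msg):
    """A consumer written against the original protocol,
    which named the quantity field 'qty'."""
    return {"item": msg["item"], "quantity": msg["qty"]}

old_msg = {"item": "widget", "qty": 3}        # original wire format
new_msg = {"item": "widget", "quantity": 3}   # 'qty' renamed upstream

parse_order_v1(old_msg)        # still works
try:
    parse_order_v1(new_msg)    # the rename is a breaking change
    broken = False
except KeyError:
    broken = True
```

The failure mode is identical whether `new_msg` arrived over HTTP or from a function call; the architecture style only changes where the break surfaces.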
Architecture is about taking the essential complexity of a problem and creating components and protocols to solve a problem in a way that makes the most sense for the team trying to solve it. Monolithic apps or microservices then should be a question more of what your team is going to execute well as much as it is a question of which structure more elegantly solves the problem at hand.
These things are not new, but like so many other ideas, they're just old ideas re-appearing in a context that had forgotten about them.
Traditionally, many of the potential problems the author relates have been solved by architectural conceits. For instance, standardize on a programming language and datastore, then share all persistence-related files. (I'd strongly suggest a FP language, preferably used in a pure manner). Then you've decreased the "plumbing" issues by a couple orders of magnitude, lowered the skill bar for bringing in new programmers, and you can start talking about using some common testing paradigms to work on the other issues.
I'm a huge fan of microservices, but it's good to talk about the bad parts too, lest the hype overrun the reality.
It's a bit pop-sciency, but it's very approachable and well explained.
It's hard to know exactly what they class as "athletic trainers". For Olympic athletes, there will always be staff. For the rest of us, what would that "unlikely" automation of the coach look like? Perhaps it would start with a movement sensor on the wrist? A smartphone app to track and recommend exercise?
That's not looking unlikely to me, it's looking like it's already here in plain sight:
Instead of waiting two to three days for a piece of postal mail, we're annoyed if that email takes two minutes. I'm not going to moralize about "instant gratification" as if it were wrong, because it's mostly not conscious and it's not a moral issue; we're just being neurologically retrained to resist delays. From a website, 10 seconds means "never": it's down, or in an unusable state. We're also (some of us, at least) at a ridiculous level of comfort; we have people who program their garages to heat up 30 minutes before they leave for work, because they can't stand the thought of 45 seconds' exposure to winter cold.
What's not becoming instant is human learning. If something can be learned quickly and will become rote, we can now program a computer to do it. So the things that we need humans for tend to be those that require subtlety or experience. That hasn't gotten faster. It still takes 6+ months before someone is good at his job. That's not a new problem. It's just less tolerated because people are more primed to expect instant results. So we're seeing an aversion to training people up into the better, more complex jobs that technology creates.
If they're wrong, then they'll be replaced by algorithms that are better at predicting economic trends than they are.
If they're right, then...
Eg. Joe can't afford X due to lack of employment, and Bob can't afford Y for the same reason. Assuming Bob can supply X and Joe can supply Y (or some more complicated network of bartering), it seems like it should be possible for an economy to arise.
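That "more complicated network of bartering" can be modelled as a directed graph; here's a sketch (all names and the data layout are illustrative) that checks whether a chain of needs loops back on itself, i.e. whether a barter cycle exists so trade can clear:

```python
def barter_cycle(needs, supplies):
    """needs: person -> good they need; supplies: good -> person who makes it.
    Follow the chain of suppliers from each person; getting back to the
    starting person means a barter cycle exists."""
    for start in needs:
        seen, person = set(), start
        while person not in seen:
            seen.add(person)
            supplier = supplies.get(needs.get(person))
            if supplier is None:
                break          # chain dead-ends: some need has no supplier
            if supplier == start:
                return True    # closed the loop back to where we began
            person = supplier
    return False
```

In the two-person case, Joe needs X (made by Bob) and Bob needs Y (made by Joe), so the chain Joe -> Bob -> Joe closes and they can trade directly.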
At that point there will be no corporations or anything else related to our society, because it will break the premise society is built on: that we are better off working with each other. Someone in control of such an AI doesn't need anyone else. It's impossible to guess what they would do with it, but a logical choice would be to eliminate potential threats (i.e. anyone who could develop similar technology or compromise it).
Once someone kicks off a "seed AI" that can develop/replicate fast enough it's game over for the rest, they win. And note that by AI I'm not talking about Terminator style self-aware machines - I'm talking about a problem solving device capable of performing given tasks.
As for the substance of this interesting submitted article, the historical facts are reviewed in a key paragraph before the article goes off into speculation about the future: "For much of the 20th century, those arguing that technology brought ever more jobs and prosperity looked to have the better of the debate. Real incomes in Britain scarcely doubled between the beginning of the common era and 1570. They then tripled from 1570 to 1875. And they more than tripled from 1875 to 1975. Industrialisation did not end up eliminating the need for human workers. On the contrary, it created employment opportunities sufficient to soak up the 20th century's exploding population. Keynes's vision of everyone in the 2030s being a lot richer is largely achieved. His belief they would work just 15 hours or so a week has not come to pass." The nub of the article's argument is that new forms of technological change might not leave us with any new forms of gainful employment.
After its interesting text discussion and chart predicting what kinds of employment are least likely to be automated out of existence, the article points out one difference between the world of the past and the world of today: "Another way in which previous adaptation is not necessarily a good guide to future employment is the existence of welfare. The alternative to joining the 19th-century industrial proletariat was malnourished deprivation. Today, because of measures introduced in response to, and to some extent on the proceeds of, industrialisation, people in the developed world are provided with unemployment benefits, disability allowances and other forms of welfare. They are also much more likely than a bygone peasant to have savings. This means that the 'reservation wage' (the wage below which a worker will not accept a job) is now high in historical terms. If governments refuse to allow jobless workers to fall too far below the average standard of living, then this reservation wage will rise steadily, and ever more workers may find work unattractive. And the higher it rises, the greater the incentive to invest in capital that replaces labour." Indeed, it may be that the funding of governmental benefits will become secure enough through rising productivity that many current workers will have children who do not need a job at all.
De-regulation subverts democracy.
Time to revisit the relationship between capital and well-being. Ricardian theories of comparative advantage drive wealth into the hands of those who control capital, not into the calloused hands of the poor suckers who sweat.
This is yet another in a series of economic articles that sound much more like a typical op-ed column than an observation, hypothesis, or proof.
But with the "download here"-button ads, there is no way to know which button to press anymore. Now installing Adblock is a requirement.
194MB for a single webpage that should mainly be text communicating a message. Does anyone other than me find this crazy?
UPDATE: Here is the bug to track: https://bugzilla.mozilla.org/show_bug.cgi?id=988266
There are lots of gray boxes now all over the web, but I prefer them over resource-hungry attention-grabbing Flash ads.
 - https://learn.adafruit.com/raspberry-pi-as-an-ad-blocking-ac...
Interestingly, there is a uBlock alternative that I had not heard about. If the efficiency claims are true, it would definitely be worthwhile to switch to it instead.
Furthermore, if sites are paid per click, my pre-AdBlock contribution to all the ad-sustained websites was probably way less than $1. They are losing little by my use of AdBlock. One reason I didn't click ads was precisely that they are so intrusive and ugly. Thanks to AdBlock I'm spared that, and they don't lose bandwidth and CPU serving ads that won't be clicked. Win-win.
Edge cases are not completely ironed out, but if people want to help perfect it, I'll very happily accept PRs!
Do all designers work on their iExpensiveMachine and not care about other people's lower-end machines/phones/tablets/whatever?
12961 XXXXXX 20 0 2615960 1.483g 58692 S 45.2 9.5 2:13.00 firefox
13098 XXXXXX 20 0 2157212 1.109g 57476 R 142.6 7.1 1:26.25 firefox
First I was using the hosts file by Dan Pollock. But now I've switched to this Neocities-hosted site. I don't know who manages it, but the entries are uniquely sorted from various sources, including entries from Dan Pollock's hosts file.
Apart from this, I use Privacy Badger, Self-Destructing Cookies, HTTPS Everywhere and Disconnect.
I wrote AdLimiter (adlimiter.com), which eliminates certain ads from Google search result pages. It's really a demo for our site rating service, SiteTruth, but it has the internal machinery to find and remove ads. The basic idea is that it finds links to sites in the DOM and works outward in the DOM to find the ad boundary. Then, if analysis of the link indicates the ad should be deleted, the offending section is deleted from the DOM. It takes one linear pass through the DOM to find ads. If the page changes, another pass is made five seconds after the changes quiet down. This approach is reasonably general and requires little maintenance.
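The pass described above could be sketched roughly like this, using a toy XHTML fragment and a made-up host blocklist in place of the real link analysis (the actual extension works on the live browser DOM, not a parsed string):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

BLOCKED_HOSTS = {"ads.example.com"}   # stand-in for the real link analysis

def strip_ads(xhtml):
    """One linear pass: find links, and for each blocked link walk
    outward to the ad boundary (here, any element with class="ad"),
    then delete that whole section from the tree."""
    root = ET.fromstring(xhtml)
    # ElementTree has no parent pointers, so build a child -> parent map.
    parent_of = {child: parent for parent in root.iter() for child in parent}
    for link in list(root.iter("a")):
        host = urlparse(link.get("href", "")).hostname
        if host in BLOCKED_HOSTS:
            node = link
            while node is not None and node.get("class") != "ad":
                node = parent_of.get(node)     # walk outward
            if node is not None and node in parent_of:
                parent_of[node].remove(node)   # drop the whole ad section
    return ET.tostring(root, encoding="unicode")
```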
I once looked at AdBlock's code. Internally, AdBlock makes heavy use of regular expressions and does a lot of searching. It seems to be doing more work per page than should be necessary.
In some ways, Ghostery is more useful than AdBlock. It blocks most trackers, which reduces network I/O. Some ads disappear once their tracker is disabled. (CBS TV shows play without commercials if Ghostery is running.)
I've been toying with the idea of an ad blocker that uses simple machine learning. All content coming from off-site links gets a light grey overlay. If you click on the grey overlay, it disappears so you can view the content, and the off-site link is rated as less spammy. If you ignore the overlay, the off-site link is viewed as more spammy. The grey overlays gradually get more opaque over areas where you never remove them, and when they go fully opaque, the covered content is deleted completely.
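The scoring half of that idea fits in a few lines; a sketch where the update constants are arbitrary guesses and the opacity is simply the learned score:

```python
class OverlayModel:
    """Tracks a per-source 'spamminess' score in [0, 1]; the overlay's
    opacity is just the score, and 1.0 means delete the content."""
    START = 0.5          # new sources begin as a light grey overlay

    def __init__(self):
        self.score = {}

    def _get(self, src):
        return self.score.setdefault(src, self.START)

    def clicked(self, src):
        # User revealed the content: rate the source less spammy.
        self.score[src] = max(0.0, round(self._get(src) - 0.2, 2))

    def ignored(self, src):
        # User left the overlay in place: rate the source more spammy.
        self.score[src] = min(1.0, round(self._get(src) + 0.1, 2))

    def opacity(self, src):
        return self._get(src)
```

Sources you always reveal drift toward transparent; sources you always ignore ratchet up to fully opaque and get removed.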
You don't need an add-on for that. Firefox and Chrome both have this as an optional built-in feature. I think it should be on by default (which it sadly isn't).
> "The main problem, though, is the process by which ABP actually blocks ads. Basically, ABP inserts a massive CSS stylesheet occupying around 4MB of RAM into every single webpage that you visit, stripping out the ads."
Isn't the actual problem, then, the way that Adblock Plus removes the ads? Why not simply provide an API that lets a plugin easily strip content from a site? If there is one already, then ABP should switch to it to reduce memory usage. If not, then we should look at the actual culprits of this problem, which are the browsers.
Under "Filter Preferences", uncheck everything marked EasyList. Under "Custom Filters" add EasyList without element hiding rules and make sure it's checked. Restart. This speeds things up considerably.
On Chromium I don't use Adblock Plus, it's too much of a memory hog. Instead, I use the much leaner and faster µBlock for dynamic filtering:
Turning off JS instantly removes a ton of annoyances, effectively removes a huge attack area for browser exploits (I've accidentally infected others by sending them links to sites that had no effect on me...), and while there are certainly sites that require it (often not even of the "application" type, but just to do something that could've been done without), the majority of the ones I come across in e.g. searching for info don't need it. If a site I find when searching refuses to show anything, then I'll just go back and continue with the next search result or use Google's text-only cache which often does have the content I'm looking for in a more readable format. The whitelist is reserved for sites that are both highly trusted and absolutely necessary to enable JS on.
All browsers have become huge memory hogs lately, but I'd rather spend 100MB+ of RAM on Adblock than be blinded by ads (a lot of which are malicious).
Consider the technical solution alternatives, and getting people to use them. Now look at how easy it is to install ABP.
http://someonewhocares.org/hosts/
HN Thread: https://news.ycombinator.com/item?id=6002544
Make sure localhost / 127.0.0.1 resolves to something, though. A blank `index.html` in the root is lightweight enough. For the perf nerds, you can serve it using lighttpd.
Now this ET site with 28 spyware modules from random advertisers is telling me: "Don't use your CPU, it will make it slower; also don't use too much RAM. Instead, make your brain slower and use more brain memory for useless stuff. Because, you know, think of the children, and because we are entitled to our business model by birthright."
1) Website owners use ads to earn money (server and employee costs, revenue).
2) Several ad networks create ads that influence and annoy users and track their browsing habits, e.g. blinking, buggy video ads with sound that sometimes even crash the Flash plugin.
3) Visitors install Adblock+ because they dislike intrusive ads that don't adhere to the browser privacy settings. Some websites are unusable without an ad-blocker. Many websites even crash the mobile browser.
4) Companies buy ads more cheaply, as in reality ads are clicked by clicking-bots and are worth less than years ago.
5) Website owners place more ads because they earn less for the same amount of served ads as last year, and because more and more people block all ads altogether.
6) Even more visitors install adblockers and turn them on for all websites.
7) Website owners introduce a paywall so that part of their content, or all of it, is behind closed doors for paying customers only.
8) The visitors switch to alternative websites that offer comparable content for free, monetized by ads or maintained as a hobby.
9) And so on...
Paywalls are not the solution; another website will appear that offers comparable content for free. Example: The Microsoft Network (MSN 1), which was meant to replace the WWW in 1995 and shipped with Win95. Bill Gates had laid out his vision of the information highway, micropayments and the Microsoft Network in his famous book "The Road Ahead"; half a year later, after MSN failed and the WWW succeeded, he released a second edition that removed those references and acknowledged the WWW. Pictures of MSN 1: http://winsupersite.com/windows-live/msn-inside-story
Everyone has to change a bit, to improve the current status:
* The ad-networks that still code flash-based ads should move to HTML5.
* The companies that buy ad-space from such ad-networks should care more about the ad-quality.
* The website owners should care more about which ad-network they select.
* The visitors should rely less on adblock plugins.
* The WHATWG/W3C/browser developers should improve the situation with HTML 5.x so that there is no remaining reason for Flash-based ads.
Because ad blocking won't work on mobiles, and on devices owned more by Google/Apple than by yourself.
If you want to block malicious scripts and plugins, you should use NoScript.
Both will actually improve memory use and performance, not to mention getting closer to solving the problem in the first place.
I'm using RequestPolicyContinued. Using a whitelist for third-party requests seems to do a lot of the heavy lifting so that my Adblock Plus filters are limited to a handful of specific things I don't want to see.
Remember: ABP = Shit. AB = Good.
Now that's real irony.
A quick `ps aux` will let you see which extensions are the worst offenders; clearing out a bunch of unused extensions helped restore my computer to normal.
I suggest trying HTTP Switchboard instead (see the previous thread).
So I usually just block ads that are really obnoxious. Every few months I review the list and remove any entry that was filtered less than 10 times or so.
That has two advantages. First I get to control what is filtered, and second it prevents that list from growing so much.
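That review step is easy to automate; a sketch, assuming your blocker can export per-rule hit counts as a dict (the rule strings below are invented examples):

```python
def prune_filters(hit_counts, threshold=10):
    """Drop custom filter rules that fired fewer than `threshold` times,
    keeping the hand-built list small and intentional."""
    return {rule: hits for rule, hits in hit_counts.items()
            if hits >= threshold}
```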
It works across all applications (browsers, apps, etc.) and requires no installation of third-party software such as extensions.
I'm curious: do these ad blocking "solutions" operate as businesses? Do ad servers pay to be removed from blacklists? Pardon my ignorance.
A lot hinges on modern auto-update pipelines.
This would explain why I've never considered Chrome to be a memory hog... (although all my machines have a lot of memory, too).
I've been using https://github.com/gorhill/uBlock to replace Adblock Plus since last week.
No worries, until those video and audio ads got out of control.
Some pages would bury my machine! And it's a good machine too: a 2012 MacBook Pro.
The memory profile of Firefox itself is kind of a pig, but not something I would worry too much about. Firefox with those ads? Out of control. Firefox with simple Adblock is back to no worries again.
I'll have to give uBlock a try.
Its like ten thousand spoons when all you need is a knife.
If you use Chromium I recommend HTTPSwitchboard.
Let me fix that for you:
> On a shit website, there can be dozens of iframes.
// If the ad script never set this flag, assume an ad blocker is active
window.hasOwnProperty('google_ad_block') || (document.body.innerHTML = 'Please disable adblock to use this page');
It's a movement by several companies and groups
Nature recently joined it, so if you want to publish a paper with them next year, you have to at least complete the checklist to make reproducibility easier: http://www.nature.com/nnano/journal/v9/n12/full/nnano.2014.2...
They have also started to reproduce publications of volunteers; here's a very recent paper detailing their replication efforts: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjourna...
Article summarizing the paper: http://www.nature.com/news/parasite-test-shows-where-validat...
The reasons for shoddy reproducibility are p-value hacking, intense pressure to publish at all costs, and a premium on 'gladwellesque' results where a simple theory seemingly explains a lot.
Gelman and Uri Simonsohn have both written a lot about this.
Seen as a meta-experiment it is incredibly weak, with a tiny, biased sample: only high-profile results with procedures simple enough that they can be combined.
As such, it can't come even close to supporting what they're (whoever "they" actually is; it may not be the actual researchers) trying to claim. What about less high-profile stuff? What about more complex setups? If 20% of your most rock-solid results are not reproducible, I wouldn't be so quick to celebrate. Imagine if 20% of basic physics or maths results weren't actually true...
This is a very well done study and almost every criticism I've read in the comments is addressed in it if you read the write up.
Also, among the ones that did, there are the Kahneman ones, and _he_ was the one to point out that most experiments are never reproduced, so there was a higher chance that his results would be reproducible.
The Journal of Open Psychology Data published findings of the replication study mentioned here, and the PsycNET site of the American Psychological Assocation provides a citation to the published version of the study findings in a psychology journal. Improving replicability in science is an ongoing effort not just in psychology but in most branches of science, and is critically important in medical studies.
I would play the hell out of that game and I'd be fluent in like 8 languages after a few months. Please, somebody with access to like $10 million of funny money, build this.
This is just fluff. Sure, the guy probably learned some French, but it's anecdotal. To assess proficiency he needs to be tested by native experts in all the modalities of language: speaking, listening, reading, and writing. Not by the opinion of some girl who probably liked him at a coffee shop.
Where I am, we take 6 months to get a student from 0 to basic proficiency in French, i.e. able to read/listen to the news and discuss advanced topics like economics. That's with 6 hours of class a day, M-F, all with native speakers.
If you really want to learn a language efficiently, I'd recommend this TED talk: https://www.youtube.com/watch?v=d0yGdNEWdn0
BTW, I doubt his 6-month mark applies to harder languages like Chinese.
I'm currently 2 years into my attempt to learn Chinese and I can say that many of these tips don't apply to all languages:
1. Listening to music won't help with comprehension of tonal languages like Chinese because songs will usually ignore the tones so that they sound better set to music.
2. Reading children's books in character-based languages like Chinese will only be helpful if you're already proficient in a few hundred basic characters. Since there's no alphabet, there's no way to sound out words the way we can in English.
Otherwise, there are some great tips here. I agree that listening to classroom discussion and hearing others' mistakes is a great way to learn. It's also important to do daily, focused practice in the mornings when your mind is fresh and not muddled by other things.
Overall I think it's a good Quora answer but not necessarily a "secret" to learning a language.
As a Canadian, French is not foreign to me in the sense of being a language of another nation, even though I don't speak it. Maybe "foreign to the person", not to your nation, is what's meant.
Even in the US, Spanish wouldn't be foreign either, even though English is the unofficial official language. Even French is part of US languages, from parts of Maine to my Acadian neighbours who went to Louisiana.
Similar to the fitness industry, those 2 barriers have spawned an industry of 'learn in your car', 'French in 30 days' and videos by polyglots who sell the idea that language acquisition is easy.
Now I'm going to tell you the hardest part of learning a foreign language.
There are no shortcuts. It takes time, it takes dedication and it will most probably cost money.
Seems like just about anyone can learn to speak French when given the opportunity to immerse themselves in a bucolic French village with nothing to do!
I hope the FAQ information below helps hackers achieve their dreams. For ANY pair of languages, even closely cognate pairs of West Germanic languages like English and Dutch, or Wu Chinese dialects like those of Shanghai and Suzhou, the two languages differ in sound system, so that what is a phoneme in one language is not a phoneme in the other language.
But a speaker of one language who is past the age of puberty will simply not perceive many of the phonemic distinctions in sounds in the target language (the language to be learned) without very careful training, as disregard of those distinctions below the level of conscious attention is part of having the sound system of the speaker's native language fully in mind. Attention to target language phonemes has to be developed through painstaking practice.
It is brutally hard for most people (after the age of puberty, and perhaps especially for males) to learn to attend to sound distinctions that don't exist in the learner's native language. That is especially hard when the sound distinction signifies a grammatical distinction that also doesn't exist in the learner's native language. For example, the distinction between "I speak" and "he speaks" in English involves a consonant cluster at the end of a syllable, and no such consonant clusters exist in the Mandarin sound system at all. Worse than that, no such grammatical distinction as "first person singular" and "third person singular" for inflecting verbs exists in Mandarin, so it is remarkably difficult for Mandarin-speaking learners of English to learn to distinguish "speaks" from "speak" and to say "he speaks Chinese" rather than * "he speak Chinese" (not a grammatical phrase in spoken English).
Most software materials for learning foreign languages could be much improved simply by including a complete chart of the sound system of the target language (in the dialect form being taught in the software materials), with explicit description of sounds in the terminology of articulatory phonetics and full use of notation from the International Phonetic Alphabet.
(By the way, the International Phonetic Alphabet was invented by language teachers in Europe to help native speakers of English learn French and native speakers of French learn English, so it could help the author of the article submitted to open this thread. The International Phonetic Alphabet was eventually extended to be useful for writing down any human language.) Good language-learning materials always include a lot of focused drills on sound distinctions (contrasting minimal pairs in the language) in the target language, and no software program for language learning should be without those. It is still an art of software writing to try to automate listening to a learner's pronunciation for appropriate feedback on accuracy of pronunciation. That is not an easy problem.
After phonology, another huge task for any language learner is acquiring vocabulary, and this is the task on which most language-learning materials are most focused. But often the focus on vocabulary is not very thoughtful.
The classic software approach to helping vocabulary acquisition is essentially to automate flipping flash cards. But flash cards have ALWAYS been overrated for vocabulary acquisition. Words don't match one-to-one between languages, not even between closely cognate languages. The map is not the territory, and every language on earth divides the world of lived experience into a different set of words, with different boundaries between words of similar meaning.
The royal road to learning vocabulary in a target language is massive exposure to actual texts (dialogs, stories, songs, personal letters, articles, etc.) written or spoken by native speakers of the language. I'll quote a master language teacher here, the late John DeFrancis. A few years ago, I reread the section "Suggestions for Study" in the front matter of John DeFrancis's book Beginning Chinese Reader, Part I, which I first used to learn Chinese back in 1975. In that section of that book, I found this passage, "Fluency in reading can only be achieved by extensive practice on all the interrelated aspects of the reading process. To accomplish this we must READ, READ, READ" (capitalization as in original). In other words, vocabulary can only be well acquired in context (an argument he develops in detail with regard to Chinese in the writing I have just cited) and the context must be a genuine context produced by native speakers of the language.
I have been giving free advice on language learning since the 1990s on my personal website,
and the one piece of advice I can give every language learner reading this thread is to take advantage of radio broadcasting in your target language. Spoken-word broadcasting (here I'm especially focusing on radio rather than on TV) gives you an opportunity to listen and to hear words used in context. In the 1970s, I used to have to use an expensive short-wave radio to pick up Chinese-language radio programs in North America. Now we who have Internet access can gain endless listening opportunities from Internet radio stations in dozens of unlikely languages. Listen early and listen often while learning a language. That will help with phonology (as above) and it will help crucially with vocabulary.
The third big task of a language learner is learning grammar and syntax, which is often woefully neglected in software language-learning materials. Every language has hundreds of tacit grammar rules, many of which are not known explicitly even to native speakers, but which reveal a language-learner as a foreigner when the rules are broken. The foreign language-learner needs to understand grammar not just to produce speech or writing that is less jarring and foreign to native speakers, but also to better understand what native speakers are speaking or writing. Any widely spoken modern language has thick books reporting the grammatical rules of the language,
and it is well worth your while to study books like that both about your native language(s) and about any language you are studying.
A special bonus for learners of French (which I have used) is that many classic French literature books (novels, collections of short stories, collections of essays, etc.) are now in the public domain, and are available as free-of-charge ebooks. You can practice a lot of reading French with resources like that, and relearn classic tales you knew in youth. Similarly, today there is boundless free audio, for example in the form of online movies and streaming news broadcasts, in all of the major world languages. Take advantage of that as you learn.
Yes, some people can work hard and retain things throughout. Beethoven could remember an entire symphony from listening to it once, but that hardly translates into anything useful for normal music students.
The actual solution is much simpler (to understand and to use, I imagine) and that made me smile, nice one!
It is very common to use Erlang as a "glue" system between other subsystems in larger installations. You will often find the Erlang system controlling code written in other languages, for different reasons: The large Java subsystem which is hard to replace. The C/C++ code you run as a hidden node() in the Erlang distribution cluster. The C-accelerated function you added to the Erlang BEAM VM through dynamic loading of a .so (called a NIF). The OCaml or Go program you communicate with through a port (essentially an Erlang-controlled pipe with a proxy light-weight process inside the Erlang VM). And so on.
Sometimes, the right tool for the job is another system. The tool 'py' provides yet another such bridge, which allows you to interact with Python programs in a very direct way.
"But something more potent than alcohol was needed for the X-15 rocket-driven supersonic research plane. Hydrazine was the first choice, but it sometimes exploded when used for regenerative cooling, and in 1949, when the program was conceived, there wasn't enough of it around, anyway. Bob Truax of the Navy, along with Winternitz of Reaction Motors, which was to develop the 50,000 pounds thrust motor, settled on ammonia as a reasonably satisfactory second best. The oxygen-ammonia combination had been fired by JPL, but RMI really worked it out in the early 50's. The great stability of the ammonia molecule made it a tough customer to burn and from the beginning they were plagued with rough running and combustion instability. All sorts of additives to the fuel were tried in the hope of alleviating the condition, among them methylamine and acetylene. Twenty-two percent of the latter gave smooth combustion, but was dangerously unstable, and the mixture wasn't used long. The combustion problems were eventually cured by improving the injector design, but it was a long and noisy process. At night, I could hear the motor being fired, ten miles away over two ranges of hills, and could tell how far the injector design had progressed, just by the way the thing sounded. Even when the motor, finally, was running the way it should, and the first of the series was ready to be shipped to the West Coast to be test-flown by Scott Crossfield, everybody had his fingers crossed. Lou Rapp, of RMI, flying across the continent, found himself with a knowledgeable seat mate, obviously in the aerospace business, who asked him his opinion of the motor. Lou blew up, and declared, with gestures, that it was a mechanical monster, an accident looking for a place to happen, and that he, personally, considered that flying with it was merely a somewhat expensive method of suicide. Then, remembering something he turned to his companion and asked. "By the way, I didn't get your name. What is it?". 
The reply was simple. "Oh, I'm Scott Crossfield."
$(".main-column").attr('class', 'large-12 columns main-column');$(".hide-for-medium-down").remove();
*we as a species
They leave out that this effect that "some of them are the same person" is inevitable, even pervasive, past a few generations. Never mind that the number of "genealogical ancestors" rapidly becomes larger than the number of humans alive at the time; it's almost certain that your ancestors ten generations back didn't travel much, so they existed as part of a far smaller, sheltered gene pool.
I suspect that the 46,000 genetic segments figure is an approximation, but it should not throw the calculations off too much if it is wrong. The truly amazing thing is how few real ancestors you have out of the possible number.
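The doubling argument is easy to check mechanically. A sketch, assuming a rough medieval world population of 400 million (the exact figure doesn't change the conclusion much):

```python
# Ancestor "slots" double every generation: 2**n at generation n.
# Compare against a rough historical world population to see how
# quickly the pedigree must collapse onto repeated individuals.
MEDIEVAL_POPULATION = 400_000_000  # rough assumption

def generations_until_overflow(population):
    """Smallest n such that 2**n exceeds the given population."""
    n = 0
    while 2 ** n <= population:
        n += 1
    return n

n = generations_until_overflow(MEDIEVAL_POPULATION)
print(n, 2 ** n)  # 29 generations: ~537 million slots
```

At roughly 25 years per generation, that's only about 700 years before the slots outnumber everyone alive.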
The single best anecdote among many is how the statue of the Colonel, which stands outside every KFC in Japan, came to be associated with the brand here. Apparently when the guy in charge of KFC Japan was put in charge of the unit, HQ didn't really think a poor Asian country merited any marketing support, but they let him use anything they had in storage. Somebody had produced several hundred Colonel statues but they were deemed too ridiculous to use. "So I said 'I'll take all of them.'" "And did you know they would be a hit with Japanese people?" "No clue, but something is better than nothing."
(Also worth mentioning: the guy's right hand man, who was Japanese, gets into an argument with him over whether selling KFC as "aristocratic American elegance" to Japanese people is exploitative, since, quote the American executive, "We sell fried chicken to poor people." "Maybe you do over there, but we won't over here." Topics in the discussion included cultural appropriation, memories of the war and associated famines, and the desire of an emerging economic superpower to consume things associated with wealth, like meat.)
The easiest way to find the video is probably your local library or university library, as it was made in the early 1980s.
The first Kentucky Christmas meal sold for a pricey $10 (almost $48 in 2014 money) and contained fried chicken and wine; now, KFC's Japanese Christmas meals cost about $40 and come with champagne and cake.
Similarly, I suspect the "Chicago Town Pizza" I see being advertised over here as a real slice of American life, would make a true Chicagoan weep.
I think most details are lost in translation, or the level of seriousness of things ("Is the mall Santa the real Santa?" / "Does Santa actually exist?"), coupled with a certain exaggerated American sense of self-importance and self-centeredness.
Or, as someone told me, a kid was told "this is not the USA, we don't celebrate Halloween!" This was in Ireland.
I came back later that night and there was a sign outside indicating that I needed to have a phone order (maybe) to get in the queue.
KFC has made fried chicken popular on Christmas in Japan, but now you can get fried chicken everywhere, including the 7-Eleven and FamilyMart stores, which have Christmas packages.
Someone mentioned alcohol at fast food chains. It's only at a few special fast food restaurants. Drinking in Japan is strange; for example, if I leave a club with a glass of alcohol in my hand, the door person will pour it into a plastic cup for me to drink out on the street.
KFC is also really popular in Vietnam, which I was surprised to discover when I visited about 10 years ago. I think the uncanny resemblance of Colonel Sanders to Ho Chi Minh might have something to do with it.
It sounds like a movie plot from
The format string semantics are actually pretty tricky because printf will perform conversions in certain cases.
data Format = FInt Format | FString Format | FOther Char Format | FEnd
data FormatPart = FInt | FString | FOther Char
type Format = List FormatPart
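The flat "list of parts" encoding is easy to play with in any language. Here's a hypothetical Python sketch of the same idea (only %d and %s are handled, and none of printf's real conversion or flag rules):

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class FInt:
    pass

@dataclass
class FString:
    pass

@dataclass
class FOther:
    char: str

FormatPart = Union[FInt, FString, FOther]

def parse_format(fmt):
    """Split a printf-style format string into a flat list of typed parts."""
    parts: List[FormatPart] = []
    i = 0
    while i < len(fmt):
        if fmt[i] == "%" and i + 1 < len(fmt):
            spec = fmt[i + 1]
            if spec == "d":
                parts.append(FInt())
            elif spec == "s":
                parts.append(FString())
            else:
                parts.append(FOther(spec))
            i += 2
        else:
            parts.append(FOther(fmt[i]))
            i += 1
    return parts
```

A type-safe printf would then walk this list and demand an int for each FInt, a string for each FString, and so on; that's exactly what the dependent-type encodings above let you check at compile time.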
Neither paper has data on how much of your dietary behavior is microbe-related, which is a shame. Here's the data I have from developing a pseudo-FMT probiotic:
During initial beta testing, I found myself snacking less and forgetting to eat. The forgetting to eat bit was extremely abnormal, as I'm usually very regular about meal times. I also found myself eating breakfast for the first time since starting at college.
These and the experiences of my co-founder were sufficiently interesting that we added questions to our pilot study, where 72% of participants reported a reduction in food cravings and mild weight loss.
So the effect is much bigger than you might imagine. Here's a customer describing his rather extreme subjective experience:
"I started a total fast on Monday, and for the first 3 days only consumed water or green tea (and your probiotic), and today I only had BCAAs and a plain salad with vinegar, and so far this has been by far the easiest fast I've ever done. I haven't even gotten to the point where I feel physically hungry yet, and even my psychological desire for food is much weaker than it has been in past fasts."
If anyone here is interested in a longer discussion of health-related microbiome research, http://www.generalbiotics.com/science (which my co-founder wrote) provides a fairly thorough overview.
There is the hypothesis that states of hunger and states of anxiety are "signaled through the same mechanisms" (sorry, no time to locate all the long names) and that the state which in other languages is called "being full" gives "relief" from anxiety and stress. In other words, overeating is just a misuse of food as an answer to increased stress.
Binge eating is not like the binge drinking of a stupid youth; it is like "aged" alcoholism, "staying stable" with a glass every two hours. And it is much closer to tobacco smoking than to drug abuse (there is a nice course, "Drugs and the Brain", on Coursera, btw).
Sometimes I wish I could be one of these "meme" scientists claiming to have found an oversimplified single cause (it is the microbes, stupid!) for a vastly complex, multiply-caused phenomenon. Of course microbes have some influence over the host's eating behavior, via the described neuro-chemical feedback loops, but of course this is not a "single" or even the "most powerful" cause.
"Change of social norms" was a good insight - now it is OK to live this way.
I've re-watched "The Wire" about 3 to 4 times already...the new HD release is already on Amazon Prime Instant Video and I don't think I would've noticed if the news hadn't been posted. On The Wire subreddit, someone posted a few more comparisons:
If you're new to The Wire, I suggest watching the HD versions first, in spite of Simon's compelling arguments to the contrary. The Wire has so many disorienting things about it for new watchers -- the barrage of police lingo, the high ratio of black to white characters, the 90s-era technological timeframe -- that the 4:3 ratio's fidelity isn't worth the mental bias that associates 4:3 with "cheap" or "old"... the show can already be difficult for people to get into.
Speaking of the r/TheWire subreddit...a year ago, a sound editor did a Q&A...if you're interested about the minutiae of the show, and curious about the technical details of sound mixing is done...it's one of the best things I've ever read about the show:
Is he obliquely saying that the 16:9 version of The Wire is going to be like colorized black-and-white films?
For the video enclosed in the article: warning for spoilers and mild gore.
Basically using the 16:9 HD version as some kind of Open Matte source material, if there are no shifts between scenes this should be trivial.
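For what it's worth, re-matting a 16:9 open-matte master back to 4:3 is just a centered side-crop; the arithmetic is trivial (this is only the geometry, not any actual video tooling):

```python
def crop_to_4_3(width, height):
    """Return (x_offset, crop_width) for a centered 4:3 crop of a wider frame."""
    crop_width = round(height * 4 / 3)
    x_offset = (width - crop_width) // 2
    return x_offset, crop_width
```

E.g. a 1920x1080 frame crops to a 1440-pixel-wide window starting at x = 240, assuming the original 4:3 framing was centered in the open-matte scan, which is exactly what the "no shifts between scenes" caveat is about.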
I was afraid it wouldn't work in 16:9, and thought I would prefer it to be kept in 4:3 like the Star Trek TNG HD remasters, but I actually prefer the new 16:9 ratio over the old one.
Ideally I would prefer to have an option to draw paths with either quadratic or cubic Bezier segments, but edit paths normalised to cubic Bezier segments.
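For reference, that normalisation is standard degree elevation: a quadratic with control points Q0, Q1, Q2 traces exactly the same curve as the cubic with C0 = Q0, C1 = Q0 + (2/3)(Q1 - Q0), C2 = Q2 + (2/3)(Q1 - Q2), C3 = Q2. A sketch, assuming points as (x, y) tuples:

```python
def quadratic_to_cubic(q0, q1, q2):
    """Degree-elevate a quadratic Bezier segment to an exactly equivalent cubic."""
    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
    # The curve traced is identical; only the representation changes,
    # which is why editing everything as cubics loses nothing.
    return q0, lerp(q0, q1, 2 / 3), lerp(q2, q1, 2 / 3), q2
```

The reverse direction (cubic to quadratic) is lossy in general, which is the argument for normalising to cubics for editing.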
I'm always a little geeked-out by unification like this. :D
This is one of the sites that contributed to that work. So strange to see it appear out of context here.
Renault & Citroën feel like they were design powerhouses in this context, with numerical industrial design.
The curves were like compressed specifications, instead of "make it look like this" with a sculpted model. That said, cars from the same era built by aerodynamics engineers also ended up with beautiful curves (like the Jaguar E-Type).
For blocking tracking, the most effective tools are browser extensions made to block ads. Ghostery provides comparisons, on a non-biased website, between methods of blocking tracking through browser modifications. According to the site, the Do Not Track header actually has an effect: an 18% difference in cookie size when the header is set. AdBlock Edge and disabling third-party cookies result in a 59% and 40% decrease in cookie size, respectively. It seems the easiest thing you can do to lessen your internet footprint is to disable third-party cookies and enable the DNT header, while the majority of tracking can be eliminated through a browser extension. (But with the recent revelations, using a browser extension may actually degrade your browsing experience if you don't have the RAM to spare.)
 http://www.areweprivateyet.com/ https://news.ycombinator.com/item?id=8802424
Now, granted, it's technically far inferior to a DNT header (it sets a cookie on each ad network domain) but as far as I can tell it works and has worked for years.
If we are going to get something like Do Not Track, then it should have been drafted out of the public eye, had a nice short period for public comment, and then received some sort of backing in law. Speculative implementations didn't really help.
I'm not too familiar with the laws surrounding things like 'do not call' lists and anti-spam measures, but some sort of system from that area of law could surely have been a part of DNT.
The only real solution is client-side, and we have that technology now: hosts-blocking, Ghostery, AdBlock, etc. If enough people cared, it could be enabled by default on new browser installs.
Incidentally, Jeff Hawkins was the first to work on Palm software... you could say early versions of today's smartphones.
Coming back to brain science, I somehow feel it has more to do with complexity (santafe.edu).
I am waiting for Jeff Hawkins to launch an institute like the Santa Fe Institute (complexity) that works exclusively on brain science. Perhaps that would increase the application of brain science.
I encourage anyone interested to consider supporting Joerg, Werner, Sebastian and everyone else working on this project!  100 euro now and at the very least you can have a really cool device to hack around with.
Tobias Engel demonstrates (amongst other things):
* How to find out the phone numbers of nearby cellphones
* How to track the location of any cellphone worldwide, knowing only its phone number
* How to intercept outgoing calls of nearby cellphones (to record and/or re-route to a different number)
But on a serious note, conference organizers should pay close attention to how CCC does things and replicate it. The pre-talk on-screen information is amazing and useful.
The key point I heard, and I'll embellish a little further: legislators are passing laws like the DMCA thinking they are merely trifling with entertainment options, but they are mucking with critical infrastructure, the central mediating artifact of our lives, maybe even the platform for our existence. Tread lightly.
Let's be clear here: CD is primarily a science fiction writer (feel free to look that info up yourself), not a programmer/engineer (like Richard Stallman), not a researcher (like Michael Geist), not an activist (many, many examples we all know), or quite frankly anyone of any relevance. He's the new breed of self-aggrandizing web whore who gets himself shoehorned into "tech" sites or simply via his dumpster blog Boing Boing. To stay relevant he takes popular internet news (say, Gamergate) and takes the most hardline politically-correct stance on it. A great example of this would be how he recently co-authored a fictional book about a "female gamer". I mean... c'mon...
For the sake of the internet, please ignore this person and support people who make a real difference. (I fully expect to be voted down for this for not brainlessly applauding these hipster heroes.)
"That's particularly useful in places where street numbers are otherwise unavailable or places such as Japan and South Korea where streets are rarely numbered in chronological order but in other ways such as the order in which they were constructed, a system that makes many buildings impossibly hard to find, even for locals."
South Korea finished renumbering streets in 2011 and after two and a half years of trial completely switched to the new system in January 2014.
I think there is currently no mapping system that handles this madness. Google Maps still does a decent job if you're looking for a specific place, because users have reported the exact GPS positions of most businesses, but if you enter an address with a red number, you're unlikely to be directed correctly.
I guess the neural network knows nothing of colors...
"To start off with, Goodfellow and co place some limits on the task at hand to keep it as simple as possible. For example, they assume that the building number has already been spotted and the image cropped so that the number is at least one-third the width of the resulting frame. They also assume that the number is no more than five digits long, a reasonable assumption in most parts of the world."
This seems like a huge task. Someone has to go through all the thousands of images and first crop them? During that time, it would seem like they could just input the number into a database.
Maybe I'm missing something, but I read the "cracked" part to be a totally automated system that scans all the pictures and pulls the numbers with no human manipulation.
Instead, we keep a local cache inside the web page. This cache maps URL to JSON payload, and our wrapper on top of XMLHttpRequest will first check this cache and deliver the data if it's there. When we receive an invalidation request over IMQ, we mark it stale (although we may still deliver it, for example for offline browsing purposes.)"
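A minimal sketch of that pattern (all names hypothetical, not the site's actual code): a URL-keyed cache where invalidation marks entries stale rather than evicting them, so stale payloads can still be served for offline browsing:

```python
class PageCache:
    """URL -> payload cache with mark-stale (not evict) invalidation."""

    def __init__(self):
        self._entries = {}  # url -> [payload, stale_flag]

    def put(self, url, payload):
        self._entries[url] = [payload, False]

    def get(self, url, allow_stale=False):
        entry = self._entries.get(url)
        if entry is None:
            return None
        payload, stale = entry
        if stale and not allow_stale:
            return None  # caller should refetch from the server
        return payload

    def invalidate(self, url):
        # Mark stale rather than delete: the old payload is still
        # usable when offline or while the refetch is in flight.
        if url in self._entries:
            self._entries[url][1] = True
```

The interesting design choice is exactly the stale flag: deleting on invalidation would make offline use impossible, while serving stale data silently would defeat the invalidation.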
 A cache-buster is typically a random value (or timestamp) included as a query-string argument on the URL. This value has no meaning to the server, but because the URL string has changed the browser can't make any assumptions about what the return value would be, and is therefore forced to make a request to the server.
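Concretely, a cache-busted URL is just the original with a throwaway query parameter appended (the parameter name here is arbitrary; the server ignores it):

```python
import time
from urllib.parse import urlencode

def cache_bust(url, value=None):
    """Append a meaningless, unique query parameter so the browser
    can't reuse a cached response for this URL."""
    if value is None:
        value = str(int(time.time() * 1000))  # a timestamp works as well as a random value
    separator = "&" if "?" in url else "?"
    return url + separator + urlencode({"_": value})
```

Since the full URL string is the browser's cache key, any change to it forces a fresh request, which is the entire trick.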
EDIT: I think I see now - what they refer to as "the cache" is more complex than I had thought; it seems somewhat more analogous to the state in a React app, in that it is triggering UI updates and isn't just a dumb layer between the client and server.
I've also been working on a protocol definition for REST updates called LiveResource. Anyone interested in this problem space, please send feedback. :) https://github.com/fanout/liveresource
I still don't quite know when I should use a graph database, but I imagine for social-networking types of websites it is a must (since a standard RDBMS or NoSQL store gets too verbose).
George Lucas, on the other hand, also produced "Indiana Jones and the Temple of Doom", which portrays Hinduism in the most condescending of ways.
Yoga is detached from its roots: http://www.hafsite.org/media/pr/takeyogaback
And other anti-Hindu propaganda in CA: http://en.wikipedia.org/wiki/California_textbook_controversy...
I'm drawn to believe that ideas usurped from Hinduism and Buddhism tend to be appreciated more when portrayed as some new-age quasi-spirituality by a Caucasian.