from here: https://www.samsung.com/uk/info/privacy-SmartTV.html
So, disable it. I don't understand everybody's fascination with voice recognition. I don't find it more convenient at all. I'd much rather just push a button. It's really not that complicated.
I would love to see a TV vendor prosecuted for this.
Now it is fair to say that the attack I just described requires the ability to MitM the network and physical access to the device. However, remember that these TVs use an IR remote, and all an attacker needs is visual access to the TV. If it can be seen through a window, it can be controlled through a window, and these things typically don't require a password to modify the Wi-Fi settings. Some smart TVs also have proxy settings which, again, typically don't require a password to modify.
Given what I just covered, think hotel. From a risk perspective that's what I'd be most worried about. I wonder how many are installing smart TVs with voice recognition? For most other scenarios, the situation on the ground is that you are secure only because no one is targeting you. In the case of a hotel, someone could be targeting everyone. Such an attack could prove valuable, especially if done in executive suites near financial centers.
(Nuance/Apple Siri, Microsoft Cortana, Google Now, IBM Watson Speech, Amazon Echo, LG-Smart TV, etc.)
From a consumer perspective you want an offline speech product like Nuance Dragon NaturallySpeaking (http://en.wikipedia.org/wiki/Dragon_NaturallySpeaking); it's the same technology that powers Nuance cloud-based products like Apple Siri, IBM Watson, etc.
Submitted: https://netzpolitik.org/2015/samsung-warnt-bitte-achten-sie-... which links to http://martingiesler.tumblr.com/post/110325577280/samsung-wa... which links to http://mostlysignssomeportents.tumblr.com/post/110300533107/... which links to http://boingboing.net/2015/02/06/samsung-watch-what-you-say-... which links to http://www.reddit.com/r/technology/comments/2uuvdz/samsung_s... which references https://www.samsung.com/uk/info/privacy-SmartTV.html
On the other hand, the HN rules suggest doing things like this if you want to cherry pick a certain aspect of a page...
Not ideal but doesn't strike me as a big risk
Bitcoin, by contrast, is interesting technology but with weaknesses that make it unsuitable as something that regular people interface with every day (security implications, mostly). It is also economically problematic (widespread use of Bitcoins would mean regressing back to gold-standard times).
Ideally, though, Bitcoin can play a useful role by putting enough pressure on other payment systems to remove any remaining suckiness (mostly the fact that existing payment systems are very bad at international and cross-currency payments).
Here, you "kill" real people with a face. Objective? Motivation? Kill him before he kills me?
I'm deeply concerned about this and find it disgusting.
edit: Wow yeah, here's a look at the 3.3V power line when you flash the board, it drops almost down to 0V and then wildly fluctuates for about 100 nanoseconds: http://imgur.com/hG86pRy
edit 2: Another interesting measurement, with the board _totally unplugged_ and flashing it you can see a big voltage spike on the 3.3V rail. Up to 6-7 volts or so for a few nanoseconds: http://imgur.com/td262QK
I guess not only can you learn about electronics but also Einstein's photoelectric effect with the Pi 2!
But as soon as the demo for the press started, the machine crashed. The management was upset. Later, the reason was found to be some old EPROM chips that are erased using UV light, and the photographers' cameras had strong flashes that went through the tapes covering the "window" on the chip. This caused the program memory to be corrupted when a photograph was taken.
Another legendary debacle triggered by light hit at a highly publicized affair thrown by IBM, ironic considering that IBM is the master of the seamless image. D. E. Rosenheim, who helped develop the IBM 701, the first mass-produced modern commercial computer, recalled the famous faux pas, which occurred when the company held a dedication ceremony for the 701's installation at its New York headquarters. Top-level executives, the engineering team, and a gang of reporters crowded the ceremony room.
"Things went pretty well at the dedication," said Rosenheim, "until the photographers started taking pictures of the hardware. As soon as the flash bulbs went off, the whole system came down. Following a few tense moments on the part of the engineering crew, we realized with some consternation that the light from the flash bulbs was erasing the information in the CRT memory. Suffice it to say that shortly thereafter the doors to the CRT storage frame were made opaque to the offending wavelengths."
Those who do not know their history are doomed to repeat it.
Any experienced engineer will have a spark generator (car ignition coil, spark gap, and short dipole) to test whether his latest project misbehaves when confronted with impulse interference.
As an EMC investigator I would always carry a spark generator to demonstrate to newbie engineers why EMC compliance is so important.
I've seen a spark from 50ft away crash or reset a microprocessor system. Just the static discharge from walking on carpet is often enough.
Try some laser pointers, especially towards the blue end of the spectrum where the photons have more energy. You may be able to trigger this effect by pointing at a specific IC.
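The intuition about blue light can be checked with the photon-energy relation E = hc/λ. A quick sketch (constants rounded, wavelengths are typical pointer values, function name is mine):

```typescript
// Photon energy in eV from wavelength in nm: E = hc / λ,
// with hc ≈ 1239.84 eV·nm.
function photonEnergyEV(wavelengthNm: number): number {
  return 1239.84 / wavelengthNm;
}

// A blue-violet pointer (~405 nm) delivers ~3.06 eV per photon;
// a red one (~650 nm) only ~1.91 eV -- a meaningful gap when it
// comes to knocking carriers loose in exposed silicon.
console.log(photonEnergyEV(405), photonEnergyEV(650));
```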
The fix is simple: apparently, you just have to cover U16, which controls the power supply.
If there's anything in this world noisier than a spark gap, I don't know what it is.
I think the first radio transmitters were spark gaps.
The energy flies thru the air, and is coupled onto the power line.
The power supply doesn't cope well with the oscillations, and hiccups.
I see the notes about U16 being photosensitive, but if it is in black epoxy like most ICs, I'm not buying that light gets into it.
It's possible that blue tack shields the EMP a bit.
btw: anyone tried to light-freeze other devices (banana, orange, cubie, etc) ?
A laser (no EMP!) shone on that chip will also crash the Pi.
In light of Heartbleed and Shellshock, I propose calling this the Photon Torpedo vulnerability.
Makes me sad because I'm imagining a Raspberry Pi 2.1 release in the near future now...
Or to look on the upside, the Pi now comes with a free photodetector.
And if it is light sensitivity, then it should be tested with a bright continuous light.
So the strategy today's developing countries should be following is to find new frontiers in science, technology, and entrepreneurship to create wealth while in parallel trying to provide basic facilities to their people. Developing countries are in some ways like startups: perpetually strapped for cash and resources, struggling to stay afloat, and facing tough odds. The key for them is not to try to compete in areas where others already dominate, but to disrupt them (by trying drastically different approaches) or to seek new fields. Microsoft didn't try to compete with IBM in mainframes; they went for the then-burgeoning PC market. Apple is the world's largest corporation not because it competes head-on with Microsoft in the PC market, but because it disrupted mobile. Similarly, space is a good avenue for India to compete in, where there are few incumbents and where India can exploit its natural advantages (such as its eye for cost-saving and huge, inexpensive talent pool).
Updated: Edited to remove lines that detract from the main point.
Europe has been trying to push Vega as the European offering in this market. It's exciting to see how the launcher space is developing, especially for small payloads. I know a few startups that are targeting this space because of studies, like those undertaken by SpaceWorks, that point to an expected explosion within the coming 5 years.
Given that I'm working on space debris risk mitigation at the moment, I'm looking at this from a somewhat different perspective. Most small-satellites to date have been launched to low enough orbits that they can meet the 25-year de-orbit guideline without too many issues. With the commercial market rapidly expanding though, there are a lot of applications that require higher orbits, and that's when space debris becomes a huge issue. Keeps me in a job!
All in all, great news for ISRO, and hopefully a sign of more international collaboration and commercial expansion in the years to come.
 http://www.sei.aero/eng/papers/uploads/archive/IAC-14.E6.1.3... (PDF)
 http://www.sei.aero/eng/papers/uploads/archive/SSC14-I-3_v1.... (PDF)
Suddenly I realize the importance of the Google investment in SpaceX to launch 700 internet service satellites. Surely those could include cameras. Will we get realtime Google Earth?
 http://yourstory.com/2015/01/team-indus-from-india-wins-goog... http://www.teamindus.in/about-us/ http://www.sasken.com
[Trusting Trust]: http://cm.bell-labs.com/who/ken/trust.html
Sure, in theory, a perfect KTH scheme would be undetectable, since it suborns every means of detection. But in practice it often wouldn't. A KTH virus would have to anticipate all tools which may be written to detect it, and given the modern open and closed source software world the complexity would explode.
I get that they don't care about public opinion now - but it's still very frustrating when this sort of thing happens. It wouldn't kill them to stay open and close registration and then offer automatic migration to the service once it's integrated into Google - the user base gets preserved this way which is another win.
Is there such a service/program that does this?
How hard would it be to get 3 million people to sign up for something at $5 a year? I imagine that at that point you might as well charge $5 a month or something, since the fixed cost of getting a person to pull out their credit card is so high
currentoor > Why is HN so interested in linear algebra lately?
me> It happens to all topics.
One topic gets voted to front page, then people fall down the rabbit hole, posting any links they hit on their way down.
Once every 6 months or so Plan 9 gets a front page hit, probably from someone getting into Go-lang. Then we see all the related papers and websites flood in for a while - Russ Cox' site, cat-v, Rob Pike Interviews, Utah2000, The birth of UTF-8.
It's like the September that Never Ended.
The Story of Mel is on the same cycle.
What I was expecting was to learn how to build a computer out of transistors, you know, with a soldering iron, as I wasn't having much luck finding paying work when I was in high school.
What the course actually taught was how to write device drivers for the LSI-11 - a PDP-11 compatible minicomputer - in assembly code, hand-assembling it into octal, then entering with a keypad using ODT, the Octal Debugging Technique.
It was my only college course for which I received a C. :-(
Mel finally gave in and wrote the code, but he got the test backwards, and, when the sense switch was turned on, the program would cheat, winning every time. Mel was delighted with this, claiming his subconscious was uncontrollably ethical, and adamantly refused to fix it.
"In any case, office reproduction began to grow very rapidly. (It may seem paradoxical that this growth coincided with the rise of the telephone, but perhaps it isn't. All the evidence suggests that communication between people by whatever means, far from simply accomplishing its purpose, invariably breeds the need for more.)"
Anyone know if there is a prior discussion?
muahahah... is alive!!!
Based on the way I read HN, some customization that I would definitely want to do are:
1. Headline navigation (mapped to 'j') - move cursor to the next headline instead of the next line
2. <Enter> / O opens the link in browser instead of the HN thread
3. Opened links get blurred
4. Quick page reload mapping and Auto reload
But this is purely based on my style of reading HN.
I also agree with a few others here. Adding comments support would be so awesome.
"We made some test once, and we changed the skin of the software, to a new color. Every people said the software sounded better with that new skin. Yet we changed absolutely nothing except the color."
I'm still wondering if some graphical configuration ( such as a bright color) wouldn't stimulate the brain more, making it more receptive in general, and to sound in particular, letting people "hear" better.
The vendor claims that
1. 'All audio cables are directional.'
2. 'When insulation is unbiased, it slows down parts of the signal differently, a big problem for very time-sensitive multi-octave audio.'
That's two verifiably untrue or misleading statements with reference to a data cable, putting all subjective sound-quality fluff aside.
Puts me in mind of the claims of homeopaths, etc.
There is a bell curve of purchasing mentality - from a minority who buy solely based on what's cheapest, through varying degrees of cost/benefit tradeoff, through to a minority who will tend toward whatever is most expensive. It often pays to offer something to that latter group.
Why, in the world of digital audio with ones and zeros, would any cable, be it silver or gold or whatever superconducting cores, make any difference to sound quality?
Yes, it would probably make a 0.0001% (wild guess only) speed difference due to better conductivity and less error correction. But if everything gets decoded at the chip level, then the cable should in theory make absolutely NO difference in sound quality whatsoever.
Please correct me if I am wrong.
Edit: just found a £2,199 MP3 player: http://www.audiovisualonline.co.uk/product/8288/astell-amp-k...
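To make the bit-exactness argument above concrete: a toy checksum over the received bytes is enough to show that two cables delivering the same payload are indistinguishable to the DAC (byte values and function name here are arbitrary, for illustration only):

```typescript
// Simple rolling checksum over a byte buffer -- illustrative only.
function checksum(bytes: number[]): number {
  return bytes.reduce((acc, b) => (acc * 31 + b) >>> 0, 0);
}

// Same payload via a cheap cable and an expensive one: identical bytes,
// identical checksum, identical decoded audio.
const viaCheapCable = [0x52, 0x49, 0x46, 0x46];
const viaExpensiveCable = [0x52, 0x49, 0x46, 0x46];
console.log(checksum(viaCheapCable) === checksum(viaExpensiveCable)); // true

// A genuine transmission error, by contrast, changes the data itself:
const corrupted = [0x52, 0x49, 0x46, 0x47];
console.log(checksum(corrupted) === checksum(viaCheapCable)); // false
```

In other words, a link either delivers the bits or it doesn't; there is no "better-sounding" way to deliver identical bits.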
On these leads, electrons are flowing a little bit in one direction and then as the polarity of the source shifts, a little bit in the other. The whole idea of having a "direction" in a cable is just...
At one time we really had analog equipment "all the way down", so it kind of made sense to minimise distortion and loss in every individual link. Nowadays, we use digital systems where the music information is conveyed as "symbols". In this realm, a lot of the concepts from analog systems (or even the "pipe" system mentioned above) just make no sense.
It is true that the digital information is still run over analog cables (in this case I assume we are talking about an Ethernet layer), but it bears very little resemblance to the idea of a loudspeaker cable, as the information is packaged and translated in various ways before actually appearing on the cable.
Do you remember that funny marker pen that you could use to paint the edge of CDs to minimise effects of laser light running back and forth in the CD? That was an attempt at connecting a common understanding of the turntable with the new CD medium -- the idea of having a pickup that could be disturbed. I wonder onto what common understanding the idea of these cables is trying to connect; the water pipe idea is visual and easily taken up, but jumps a generation or two of reality.
I choose to see this as a taxation of... less gifted but wondrously more wealthy people. Actually a tax that kind of makes sense.
Of course there will be absolutely no difference with a $5 Ethernet cable.
Most interesting is that expensive analog cables are just as useless. During blind tests, listeners are unable to tell which cable is the "expensive" one.
I asked them a question on their website to see if that's true.
Scroll to the 2nd-from-the-bottom paragraph starting with "We gathered up ...".
The best part about that cable is that someone took the time to determine which direction it sounds best, and the cable is marked to make it straightforward to attach it in the best direction!
I don't take my work home with me, I don't check my work email when I'm at home. It's just not worth the stress to me.
I love my job, I love my work, I feel like I'm contributing to making the world a better place -- it's just not 100% of who I am. I have a dog, a girlfriend, a handful of close friends, a few engaging hobbies, and a ton of books to read and miles to run. I'm more than my job, and once I can pay the bills, the rest of the money is just a nice to have -- but not nice enough to give up my health and sanity.
Then again, I'm extremely lucky to be in this situation, and a lot of people aren't. Some of my coworkers work long hours still, but they seem happy about it. As long as that's true,... well, whatever floats your boat, right?
We're working on computers, doing work which does not benefit from typing for N hours straight; there is no meaningful correlation between quality/quantity and hours worked.
I wish more people realized this.
I would understand if I could work at top performance 10-12 hours a day, 5 days a week but that's just not possible for me. In the end driving developers to exhaustion is worse for everyone, with subpar code that'll probably require refactoring Monday morning.
These people probably work their ass off during their 40, 60, or maybe 80 hrs on the job. So they don't understand, when they hear that startups' work schedules are more relaxed, because they cannot relate to it. However, when they leave their desk, it's over; they're on to something else, and they probably even force themselves not to think about work anymore.
Startups take a relaxed approach to work hours because the (right) person who works there lives and breathes the startup 24/7.
It's easy to say when you're a founder (disclaimer: I am one). But it is something I have witnessed in (good) startup employees as well. They think about it all the time.
@falcolas is right, who the hell cares how many hours in the week you spent executing your tasks? Shouldn't the time "thinking" about work be valued as much as "executing" the work? Don't we all "think" better outside of execution time?
As the founding engineer at my current startup, I have tremendous flexibility in setting my own hours but I willingly and intentionally work 60+ hours a week. Not because any manager pushes me to. Not because I even have to. Simply because I genuinely enjoy it.
Indeed, work is probably the most enjoyable thing in my life. On a given Friday, I'd rather be building products at work than watching a movie or engaging in some other leisure activity. Some of us don't have wives, children, or friends -- we just want to spend our time executing.
Would Treehouse be accepting of that? If not, they're just choosing to enforce a different paradigm of work rather than giving their employees true freedom.
> "But he soon found himself working that same intense pace until his wife asked him why he was working more and making less. She suggested taking Fridays off."
So the central concept of this workplace format, around which this entire article is based, was the idea/inspiration of Ryan Carson's wife, whose full name is not even mentioned. (Her first name is Gill, but is her last name Carson? Unclear from the article.) Not that it's a purely original idea---other companies have done four-day workweeks before---but it was obviously one that hadn't occurred to this particular founder. Three cheers for Gill possibly-Carson!
> "With Treehouse, Carson said he hopes to, again, buck conventional start-up culture, and not cash out by selling the company, the brass ring for most start-ups, but continue to run it as a sustainable business."
Let's hope that also starts a trend. I'm so heartily sick of companies building a great product and actively recruiting user bases to use and love that product, only to shutter it and throw all the users under the bus when the founders achieve their real goal, which is getting the attention of Google or Facebook or whoever and getting acquihired or otherwise bought out. I know that individual founders and other startup workers will often (indeed almost always) say that they really do care about their users, but as a collective structural pattern in the way that SV startup culture seems to work, it sure doesn't look that way from afar. So three cheers for (the currently-stated intentions of) Ryan Carson!
Fast forward five years from now. There are going to be a ton of tough competitors in this space, and eking out revenue growth month over month is going to be much harder. And in five years they'll probably have the added pressure of starting to think about something called profitability.
There is going to be a day of reckoning here when the harsh realities of cut-throat competition set in. That just hasn't happened yet.
This assumes that your wife is not working. I've tried taking some days off like this, and in the middle of the week everyone works, so you don't get to hang out much.
Now, that's fine for factory work, but as far as I know, relatively little effort has been put into testing that theory in knowledge jobs.
Efficiency is key, not some arbitrary limit of working hours.
Chances are yes, as a founder you aren't going to work just 32 hours a week. But it also depends on the state of the company.
And quite frankly, sometimes you can't solve problems by sitting at your computer or even talking to others in the office. Sometimes it involves taking a break and chilling out or exercising.
Too bad they don't have a need for a front-end engineer right now. I would be all over that.
Keep up the good work, guys!
I ask because I would love to implement something like this, but we get requests for service or user questions every day, and a three-day turnaround time on a user issue is terrible customer support, especially if they have other work riding on it. I realize Treehouse is different in this respect.
It seems like the more employee focused you are the less responsive to customers you can be.
So if this perk gets Treehouse talent that is +30% more productive, even if they lose -20% of productivity from Fridays off, they still win.
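The arithmetic behind that claim, with the comment's hypothetical 30%/20% figures plugged into a simple multiplicative model:

```typescript
// Net output relative to baseline: (1 + talentGain) * (1 - hoursLoss).
// The 0.3 and 0.2 inputs are the parent comment's hypotheticals,
// not measured numbers.
function netProductivity(talentGain: number, hoursLoss: number): number {
  return (1 + talentGain) * (1 - hoursLoss);
}

// 1.3 * 0.8 ≈ 1.04 -- still a net ~4% gain over the baseline.
console.log(netProductivity(0.3, 0.2));
```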
One caveat, so much of programming is loading things into your head, I think three days off every week would be difficult for anything sophisticated being developed.
Treehouse has managed to make a 4-day week work since everyone is working remotely, so the social aspect is not as prominent and consumes less time. For people who have kids, spending time with the kids becomes more important than the social experience at work, as it should. The 4-day work week all of a sudden makes sense, since they have bundled those 3 hours/day of work social time into one day of kids' time.
So what is it...32 or 60?
The only answer can be "it shouldn't matter!", if you work in an industry where you can just as easily work from home as work from your desk.
I am speculating, but I would think that most of the IT developers at Treehouse work well over 40 hours a week.
Employee culture is important, but to be honest I only care about how well the founders are executing their original vision, not all the yoga classes, free food, Fridays off, beer pong, maid service, and other things companies are offering.
32 hours a week is nice for some but that doesn't always equate to marketplace monopolization.
Then again since Treehouse is competing with others this may not be their goal anyways.
Of course since I'm a cofounder I work pretty much 24/7 but such is life...
I remember one incident where a Thursday meeting at a startup was canceled because a department head wanted to turn an already long weekend into a 4-day holiday. I put my foot down. Fridays are not weekends. If they are, then Thursdays become Fridays and you'll start skipping them too. That meeting consisted of me in a suit, in an empty office, talking to two people via Skype. I call that a victory because the meeting at least happened. (The truth is that all the low-level employees on the first floor were there and working. They cannot afford to skip out on work.)
Casual is all well and good until it creates unpredictability and disorder. Contrary to popular myth, things actually get done in meetings. Not every decision can be made while scaling the in-office climbing wall. Some decisions require people sitting down at a table to hammer through a series of points.
Does that thing that happened last night on the server qualify as a breach? I don't care that tomorrow is a Friday. Neither will your backers, nor the FBI, when they haul you in to explain why you couldn't be bothered to make a decision until after your ski weekend.
1. The Bay believes that solo founders are a bad deal, mostly because starting a company is a lot of work. And so it is - a lot of work!
2. Now here we have a handful of _startups_ that confess there isn't enough work to keep everyone in the nimble team on their toes for even forty hours a week! This contradicts 1.
Sure it means team happiness and all that. Fine.
3. For each _startup_ that has confessed the situation in 2, there should be at least 'X' times the number of _startups_ who do not accept this reality. I don't know what that number 'X' would be, but let's take it as 10.
Which means what - a bubble?
What sort of changes can be made to change people's viewpoint on hard work as a virtue?
And yet it never stops to wonder why these parasites keep recurring, or how they might use these very arguments to recur again, or what might be done about that.
"From the founding editor of The Idler, the celebrated magazine about the freedom and fine art of doing nothing, comes not simply a book, but an antidote to our work-obsessed culture. In How to Be Idle, Tom Hodgkinson presents his learned yet whimsical argument for a new universal standard of living: being happy doing nothing. He covers a whole spectrum of issues affecting the modern idler -- sleep, work, pleasure, relationships -- while reflecting on the writing of such famous apologists for it as Oscar Wilde, Robert Louis Stevenson, and Nietzsche -- all of whom have admitted to doing their very best work in bed"
The Right To Be Lazy (1883) by Paul Lafargue
The Abolition of Work (1985) by Bob Black
1. Dismantling the disincentives to hiring more people for fewer hours each
2. Dismantling the "40 hours is full time" legal fence that prevents people from wanting to drop under it (sharp benefit cut-offs instead of gradual phase-outs)
Strongly disagree. I find one-way bindings and one-way data flow much easier to reason about. A little less boilerplate code is not worth the mental overhead, cascading updates, and hunting down the source of wrong data, in my experience.
What is important is not updating the DOM imperatively from code, but instead describing it with a pure function. React, Cycle, Mithril, and Mercury do it, and it's time we got used to this. This is the real timesaver, not two-way bindings.
`Object.observe` is the wrong way to approach this problem. If you own the data, why invent a complex approach to watch it, if you could update it in a centralized fashion in the first place? Here is a great presentation on that topic: http://markdalgleish.github.io/presentation-a-state-of-chang.... I strongly suggest you read it ("Space" to switch slides) if these ideas are still alien to you.
Even Angular is abandoning two-way bindings. http://victorsavkin.com/post/110170125256/change-detection-i...
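A framework-free sketch of that idea -- the view as a pure function of state, with every change flowing through one update function. The type and function names here are made up for illustration, not any library's API:

```typescript
// A virtual node: a plain description of the UI, not the DOM itself.
type VNode = { tag: string; children: (VNode | string)[] };

interface State { count: number }

// Pure view: same state in, same description out. No DOM mutation,
// nothing to observe -- just re-render whenever state changes.
function view(state: State): VNode {
  return { tag: "div", children: [`Count: ${state.count}`] };
}

// One-way flow: all changes go through a single update function,
// so there is exactly one place to hunt for the source of wrong data.
function update(state: State, action: "inc" | "dec"): State {
  return { count: state.count + (action === "inc" ? 1 : -1) };
}

let state: State = { count: 0 };
state = update(state, "inc");
console.log(view(state)); // { tag: "div", children: ["Count: 1"] }
```

A renderer then diffs successive VNode trees against the real DOM, which is the part React and friends provide.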
I, for one, welcome our new immutable overlords.
"Abstraction is dangerous" is just fundamentally wrong. Abstraction is the only way we get anything done.
What you really mean to say is that bad abstractions are bad. But stated so clearly, it becomes obvious that it's a tautology. Well-designed abstractions that leak as little as possible are essential to everything we do.
This stuff matters, because instead of having stupid arguments over "how much" abstraction we want (which really boils down to 99 layers vs 100 layers) we should be debating exactly what abstractions we want.
Get rid of all the abstraction, local state, dependency injection, symbol management and so on. Take HTML/HTTP seriously and think about REST in terms of HTML rather than JSON.
Here's an image I tweeted trying to explain how to get there mentally:
Yes, it's a simple model. And no, it doesn't work for every app. But many apps would be infinitely simpler and more usable in a browser by using this approach, and almost all apps have some part of them that would be simpler to implement using it.
The primary complaint appears to be that abstraction eliminates your ability to operationally trace the meaning of a program. This is true, but sacrificing operational denotations only hurts if you replace it with nothing else -- and abstractions of general purpose languages are almost always more interpretable than the operational denotation of the base language itself!
Of course, there are always poor abstractions out there. I am not talking about these. Abstractions which are intentionally opaque, have confusing action-at-a-distance, etc. -- these bring down the name of abstraction in general. "Leaky" is insufficiently demeaning.
A good abstraction will have its own semantics. These can be equational, denotational, operational, what-have-you but, essentially, these semantics must be easier/simpler/more relevant than the semantics of the base language they're embedded in. Otherwise why abstract?
So what does React give you? It gives you, more or less, a value-based compositional semantics. Components have some "living" nature (an operational semantics w.r.t. to state) but they're mostly defined by their static nature. Because you can build whole applications thinking only about the static, compositional nature of components you can take massive advantage of this abstraction.
Building the core yourself and then using micro-frameworks or components like React, jQuery, etc. leads to fewer walls, as swapping is easier as time progresses.
You don't want to be caught high and dry, stuck with years of monolithic code to clean up when the fad dies, having by then abstracted away everything you need to know.
If the framework changes everything you do and abstracts away the core logic of the systems you are building, doing things without you being aware, it might be easy to get the first 90%, but there are going to be problems and eventually walls and walls against you.
The only things that should be monolithic and form the base are programming languages and platforms. Everything else should be micro-components or messaging.
One answer to this problem of opaqueness in abstractions is having a well defined denotational semantics. This makes it clear that something can work in one way & only one way (without the need to dive into library internals). I feel that Elm is doing a pretty good job of tackling this for GUIs and signals.
Nevertheless, I have a favor to ask of any framework developer out there - please, make it disassemblable and usable piece by piece outside of the framework.
OP was right - sometimes I find some aspect of a framework nice, but more often than not it is a monolithic part of the whole framework, which as a whole I dislike.
ps: the current combination that seems to fit my mental workflow is Backbone (models + collections) + Ractive.js (views) + Machina.js (for routing and defining "controllers"/states). Although I am looking to use something else besides Machina.js in my next project, as I want to have hierarchy now. And since it is all loosely coupled, I can replace parts.
True statement. Of course, it's more or less true, depending on how much the abstraction you're using leaks. Few (if any) abstractions completely encapsulate complexity, almost all will leak. But there's a range. Some abstractions elegantly cover a modular portion of your problem space and do it so well you only rarely have to think about what's going on under the hood (and will even produce effective clues as to what's going wrong when something does go wrong). Some abstractions awkwardly cover only part of a modular portion of your problem space, require a high intellectual down payment to even start to use, have gotcha cases that chew up performance or even break things, and require continual attention to what's going on just to keep development going.
Most are probably in between.
I think this is what JWZ is talking about in his famous "now you have two problems" assessment of regular expressions. I don't read him as saying "regular expressions suck," I read him as saying anything but tools from the high end of the abstraction quality spectrum means now you have two problems: (1) the problem you started with (2) the problem of keeping the model/details of how the tool works in your head. Regular expressions are arguably in the (maybe high) middle of the spectrum -- they may not cover your case well (ahem, markup) and they can send your program's performance to hell or even halt it if you don't know what you're doing.
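That performance cliff is easy to reproduce. A minimal sketch, assuming Node.js and its backtracking regex engine: a nested quantifier like `(a+)+` against a string that almost matches gives the engine exponentially many partitions to reject before it can fail.

```javascript
// Demonstrates catastrophic backtracking: the nested quantifier in
// /^(a+)+$/ gives exponentially many ways to partition a run of "a"s,
// and a trailing "b" forces the engine to try them all before failing.
function timeMatch(re, text) {
  const start = process.hrtime.bigint();
  const matched = re.test(text);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  return { matched, ms };
}

const pathological = /^(a+)+$/;
const fast = timeMatch(pathological, 'a'.repeat(12) + 'b'); // ~2^12 partitions to reject
const slow = timeMatch(pathological, 'a'.repeat(22) + 'b'); // ~2^22 -- same tiny input, ~1000x slower

console.log(`12 chars: ${fast.ms.toFixed(3)} ms, 22 chars: ${slow.ms.toFixed(3)} ms`);
```

Adding ten characters to the input multiplies the running time by roughly a thousand, which is exactly the "halt your program" failure mode if the input is user-controlled.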
Now, they're also broadly useful enough in all kinds of development that the benefits go up with the costs and so they're probably worth investing in anyway, as part of a suite of other parsing tools/techniques. So I'm not bringing the topic up to bash them.
But to take us back to the topic, I might be bringing it up to question the ROI of popular JS frameworks, which, as far as I can tell, are generally not at the high end of the abstraction quality spectrum, don't have the broad usefulness of regular expressions to recommend them, and may not even survive longer than a handful of years.
It's a bastard child of React and Backbone.
I think the author, except in points 1 and 2, didn't bother to address the performance aspects.
I spent time today working in Clojurescript which wraps the Closure library. In the last month I have used Ember.js, Clojure with hiccup, and meteor.js. I really like all of these tools and frameworks. I used to use GWT a lot, and almost committed to Dart. So many good choices.
* I have a bunch of helper functions (UI and non-UI). Each function is defined in its own file and is independent (easy to unit test). A personal library like jQuery, but not a jQuery replacement.
* App is route based. One route to many controllers. Each controller is a page/screen on mobile.
* There is only one model (API) that interface with 3rd party library. API layer talks to 3rd party library to get data or gets data from server directly, caches data, etc. Provides sync (Cached data) and async (Cached data or fresh from server) interface to controllers.
* There is an app class, or as I call it, a page manager. Responsible for managing pages: ordering, loading, unloading, etc. (Kind of big and complex, 200+ lines of logic.)
- Decides which page to animate in which direction on mobile (Loading new page or going back).
- Order of pages (Back button)
- Passes events to its controllers
- Decides which pages to keep in DOM, and which to remove.
--- If you go from homepage to comments to profile page, all pages are in DOM.
--- When you go back to comments page from profile page, profile page will be destroyed and controller will be notified. Same happens when you go from comments to home page.
--- If you go to same comments page again, it will be loaded as a new page.
- Each controller may have multiple CSS and templates
- Controller uses its template to render
- Uses the sync API to get data to render the page.
- If the sync API returns no data, renders an empty page with a loading indicator and makes an async API call.
- Controllers are idle when transitioning (animating) from one page to another on mobile. (Very important for smooth animation)
- Simple but fat controllers
- Controller handles events, UI logic
- Self cleaning so that browser can collect garbage when necessary
I package the app using node/gulp. Anything that is not page/app specific becomes part of the helper library. Each app has its own model (data layer) and controllers. I use micro templates, precompiled using node for faster performance.
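For what it's worth, the page-manager idea described above can be sketched in a few lines of plain JS. This is a toy sketch of the description, not anyone's actual code; `PageManager` and `Controller` are hypothetical names:

```javascript
// Hypothetical sketch of the "page manager" described above: forward
// navigation pushes pages onto a stack (they stay in the DOM); going back
// pops and destroys the top page so the browser can collect garbage;
// revisiting a page loads it as a brand-new instance.
class Controller {
  constructor(name) { this.name = name; this.destroyed = false; }
  render() { /* sync API first; fall back to an async call if the cache is empty */ }
  destroy() { this.destroyed = true; } // self-cleaning, as described above
}

class PageManager {
  constructor() {
    this.stack = []; // pages currently kept in the DOM, in navigation order
  }
  push(name, controller) {
    this.stack.push({ name, controller });
    controller.render(); // controller uses its own template to render
  }
  back() {
    const page = this.stack.pop(); // e.g. profile -> comments
    if (page) page.controller.destroy(); // notify the controller, free its DOM
    return this.current();
  }
  current() {
    return this.stack.length ? this.stack[this.stack.length - 1].name : null;
  }
}

const pm = new PageManager();
pm.push('home', new Controller('home'));
pm.push('comments', new Controller('comments'));
const profile = new Controller('profile');
pm.push('profile', profile);
pm.back(); // profile is destroyed; comments is the current page again
```

Going back from profile destroys that controller; navigating to comments again would push a fresh instance, matching the behavior in the list above.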
cough React Native cough
The technical paper: http://www.icas.org/ICAS_ARCHIVE/ICAS1998/PAPERS/182.PDF
The actions this system takes are drastic. Roll rates to 180 degrees/sec to get to wings-level, then a 5G pull-up. The pilot's helmet may be banged against the canopy. It's so drastic because flying 150 feet off the ground in mountainous terrain is normal procedure for fighters. If the system has to act to avoid a collision, that action has to be very aggressive.
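A back-of-envelope sketch of why the recovery has to be that aggressive, assuming a simplified constant-speed circular pull-up (the numbers are illustrative only, not from the paper):

```javascript
// Back-of-envelope: instantaneous pull-up radius at load factor n,
// treating the recovery as a circular arc. The net centripetal
// acceleration when pulling up from roughly level flight is about
// (n - 1) * g, since one g just holds the aircraft up.
const g = 9.81; // m/s^2

function pullUpRadius(speedMps, loadFactor) {
  return speedMps ** 2 / ((loadFactor - 1) * g);
}

// A fighter at 200 m/s (~390 knots) pulling 5G still needs on the order
// of a kilometer of turn radius -- which is why, at 150 feet AGL in
// rising terrain, the maneuver has to start early and be flown hard.
console.log(Math.round(pullUpRadius(200, 5))); // ~1019 m
```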
Here's what it looks like to a pilot:
Here, the pilot puts the plane into an insane bank and, as he says, "goes to sleep" and releases the controls.
My father liked to attack ack-ack positions by diving vertically on them, as the gun crew obviously was reluctant to fire straight up. Of course, you gotta keep a real close eye on your altitude and airspeed doing that.
I don't know if it's just coincidental, but a lot of the examples seem to be concerned with keeping jets from flying into mountains, as opposed to flying into level ground, which was the first thing that came into mind when I saw "Ground Collision."
Maybe someone here can enlighten me as to why these systems, specifically TCAS and GPWS in modern civilian planes, are only ever used to issue warnings/recommendations, but never take control? It would have at least prevented the Überlingen mid-air collision and probably some other CFIT incidents in past years.
I thought about this for a while and couldn't come up with a really good reason.
It was a great book that almost feels like the pre-scrum manifesto applied to building aircraft.
- to avoid radar, you have to fly low - tree-top high
- to save ammo, you have to shoot near ground-targets
- can the software be fooled in mountainous terrain?
- what about off-field landings?
- Japan's most effective bombers were kamikaze, and American pilots also considered ramming other aircraft after their ammo ran out
- if the software is wrong, does it roll inverted and pull down at 5G? how do you stop it?
Looks like that second-to-last sentence was inserted after an initial writing, since the last line refers again to the urgency to update. In my opinion, there's far too little discussion (just the one line) about the implications of what might have been exposed here. In other words, this sounds much more like evidence of a possible data breach than just a client security bug that is fixed with an updated version. Of course, that discussion/clarification should come directly from Box.
If information like S3 credentials was exposed, I assume Box's response was to immediately change all the relevant credentials (and be sure the new ones aren't exposed in later versions). If that's the case, then the client update itself probably isn't the critical thing to worry about at this point, right?
It's a bit like saying "oops, we accidentally pushed secrets to our public GitHub repo and didn't know about it until someone else pointed it out" and that person saying "quick, everyone pull down the latest revision that doesn't include the credentials."
Fairly widespread problem, which is almost inevitable given enough binary digging and reverse engineering work, unless you do real work to segregate the authentication process to a serverside PKI or something similar.
If the author comes across this, good work, nice writeup! However, if you're going to have a tl;dr section at all, you should put a brief description of the vulnerability in it. In this case, the vulnerability is simple enough that it can be briefly expressed in a tl;dr.
It would have been useful to check whether that 'supposed' is true and if so, how they fixed this. Worst-case, they did the easy thing and obfuscated the strings.
As someone chiefly interested in .js for its 'functional curious' side, the new features in ES6 have me really excited.
Scene.org's core function is to act as an archive. A very large share of all demoscene productions, ever, are hosted on scene.org and its mirrors. It's been fulfilling this service for many, many years now, and I don't think it's going to stop anytime soon. It's excellent for a community to be able to rely on such a dependable file host for such a long time.
For a more accessible and searchable database of demoscene productions, it's better to go to http://pouet.net. Don't be scared away by its, well, "impressive" look and feel; it really is the central hub of the demoscene, and the design is chiefly maintained for nostalgic reasons.
Another more detailed archive of roughly the same productions is http://demozoo.org.
Which explains why he never told anyone about them: their ownership was unclear, and while deeply significant to Armstrong, the artifacts could easily have been confiscated by NASA and put in some dusty vault to rot away, like the suits they wore mostly did before being rescued.
Interestingly, there is a new carabiner on the market whose locking mechanism is more like a button (http://www.rei.com/product/840193/black-diamond-magnetron-ro...): the difference being that the mechanism must be activated from both sides (via pinching the purple parts in the image), and has magnets forcing the carabiner into the "locked" state when not being held unlocked.
Also interesting is that the waist tether is adjustable. That could be a point of failure -- imagine floating off the end of your tether. Although I can't tell whether the waist tether is designed to attach astronauts to the spaceship, or just tools to astronauts. Howstuffworks.com implies it attaches astronauts to the spaceship (http://science.howstuffworks.com/spacewalk4.htm), but brighthub.com implies it's for tools (http://www.brighthub.com/science/space/articles/126178.aspx).
For an online text that covers similar stuff, see http://interactivepython.org/runestone/static/pythonds/index... .
The last "interview" chapter is about getting a job, not about CS itself.
A good starting spot for the topics in "computer science", at least at the undergrad level, is the ACM curriculum ( http://www.acm.org/education/CS2013-final-report.pdf ).
My CS degree involved image processing, graphics, operating systems, systems programming (low level programming), programming language theory, discrete math, linear algebra and statistics, just off the top of my head.
Interestingly programming is actually not a big part of a degree (again, as I understand it.) It takes many years to become a good programmer, and it would be a waste to dedicate an entire 4 year degree to just that.
> you are already familiar with Java or C++ syntax
not sure you will have too much success hitting your target demographic of "people who are ignorant of computer science, yet are experienced programmers"
Computer Science is a big field that spans many areas of programming, theory and research.
Strange - not a single citation/reference?
I think there's an error here:
string[1..3] = abc
string[1..1] =
Thinking about stacks, trees, and graphs can go a long way to build up learners' ability to simulate what the computer will do, e.g., getting the steps right for breadth first search in a graph is a rite of passage.
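For reference, that "rite of passage" version of breadth-first search is short but easy to get subtly wrong. A textbook sketch (the classic stumbling point is marking nodes as visited when they are enqueued, not when they are dequeued):

```javascript
// Textbook breadth-first search over an adjacency-list graph: visits
// vertices in order of distance from the start, using a FIFO queue and
// marking nodes as visited at enqueue time so nothing is queued twice.
function bfs(graph, start) {
  const order = [];
  const visited = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift(); // FIFO: oldest entry first
    order.push(node);
    for (const next of graph[node] || []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return order;
}

// A small diamond-shaped graph: d is reachable via both b and c,
// but is visited exactly once.
const graph = { a: ['b', 'c'], b: ['d'], c: ['d'], d: [] };
console.log(bfs(graph, 'a')); // [ 'a', 'b', 'c', 'd' ]
```

Simulating that queue by hand on a few small graphs is exactly the kind of mental execution the passage above is talking about.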
I do appreciate accessible text though - worth looking into.
I wonder if you could re-format your book in that manner?
This doesn't communicate that algorithms are fun. An algorithm book should be like a magician's show, really, with fun problems to apply the algorithms to.
I also note that there aren't any links for backreferences to topics, and that at least one topic is missing, heaps.
I am actually very fond of Robert Sedgewick's books (second edition), and Donald E. Knuth's monumental accomplishment. Those books are fun; most books concerning algorithms are not as fun as they should be.
I am picky, I guess; I want fun exercises or presentations, but also accurate details and meticulous explanations.
The hack seems like it was possible because the engine was designed to run at arbitrary framerates, and the romhacker (ehw?) found framerate and vsync-related functions in the demo version of the game, which actually has debug symbols baked in!
Romhacking sounds really fun.
Regardless of the actual repository (though don't get me wrong, this is still super cool and I like seeing AppleScript in action since I feel like it's criminally underused), this kind of thing in a Readme always brings a grin to my face. Tinkering for tinkering's sake is the best.
If it's written in Obj-C, you can extract fully usable headers from it. If C++/C/etc., it will be difficult (but not impossible) to understand it.
After extracting the headers from that, you may well be able to use it instead of AppleScript.
Just thought I'd mention it here in case others are thinking of trying this out on Mountain Lion.
Edit: CamHenlin has already committed a fix for this problem. Nice work!
- https://weechat.org/
- https://github.com/glowing-bear/glowing-bear
- https://github.com/ubergeek42/weechat-android
I should label it "iMessage server".
Could someone host iMessage as a service without being sued into oblivion?
One of the issues with leader election in Raft is the potential for split votes. Raft tries to guard against this by randomizing election timeouts in order to discourage two candidates from requesting votes at the same time. What Ayende suggests, though, is to add a pre-vote stage wherein followers poll other nodes to determine whether they can even win an election prior to actually transitioning to candidates and starting a new election. This ensures that only up-to-date followers ever become candidates and thus prevents nodes that will never win an election from ever transitioning to candidates in the first place.
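A rough sketch of that pre-vote stage (hypothetical data shapes, not drawn from any particular Raft implementation): a peer grants a pre-vote using Raft's standard last-term/last-index up-to-dateness comparison, and the follower only transitions to candidate if a majority would grant it.

```javascript
// Hypothetical sketch of a Raft pre-vote stage. A peer grants a pre-vote
// only if the would-be candidate's log is at least as up-to-date as its
// own: compare last log terms first, then last log indexes.
function logUpToDate(candidate, voter) {
  if (candidate.lastLogTerm !== voter.lastLogTerm) {
    return candidate.lastLogTerm > voter.lastLogTerm;
  }
  return candidate.lastLogIndex >= voter.lastLogIndex;
}

function canWinElection(follower, peers) {
  // Count our own implicit vote, then poll peers -- crucially, without
  // incrementing any terms or disrupting the current leader.
  const grants = 1 + peers.filter(p => logUpToDate(follower, p)).length;
  const clusterSize = peers.length + 1;
  return grants > clusterSize / 2; // only then transition to candidate
}

const peers = [
  { lastLogTerm: 3, lastLogIndex: 9 },
  { lastLogTerm: 3, lastLogIndex: 12 },
];
const current = canWinElection({ lastLogTerm: 3, lastLogIndex: 10 }, peers); // true: 2 of 3
const stale   = canWinElection({ lastLogTerm: 2, lastLogIndex: 50 }, peers); // false: only its own vote
```

A stale follower fails the pre-vote and never bumps its term, which is the whole point: it can't force a real election it could never win.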
There are two conflicting implementations of a parallel utility. And from what I can tell, the GNU parallel utility is much more useful than the one in moreutils. Which meant that when I was 1) doing processing which benefited greatly from parallelization and 2) found that the moreutils version wasn't doing what I wanted, nor could I figure out how to make it do so (compounded by confusion over online searches providing GNU parallel syntax which didn't work), I had to remove the entire moreutils set to install GNU parallel under Debian.
The two versions aren't even a candidate for /etc/alternatives resolution as the commandline syntax and behavior differs.
Either a name change or refactoring to a different package for the 'parallel' utility would avoid much of this.
And I'd really like to see numutils packaged.
Also: 'unsort': sort -R (a.k.a. --random-sort)
(using GNU coreutils 8.23)
(I'm not familiar with a seed-based randomized sorting utility though.)
counts the time between occurrences of the given string on stdin. stdin is consumed. Output will be the times in floating point seconds, one per line.
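A deterministic sketch of that core logic (a hypothetical toy with timestamps injected so it's testable; the real utility would read stdin and use the wall clock): emit the elapsed seconds between successive lines containing the target string.

```javascript
// Toy version of the "time between occurrences" idea: given a sequence of
// [line, timestampSeconds] pairs, return the gaps (in floating-point
// seconds) between successive lines containing the needle.
function occurrenceGaps(needle, timestampedLines) {
  const gaps = [];
  let previous = null;
  for (const [line, ts] of timestampedLines) {
    if (!line.includes(needle)) continue; // non-matching lines are consumed silently
    if (previous !== null) gaps.push(ts - previous);
    previous = ts;
  }
  return gaps;
}

console.log(occurrenceGaps('ERROR', [
  ['ERROR a', 0],
  ['ok',      0.5],
  ['ERROR b', 1.25],
  ['ERROR c', 3],
])); // [ 1.25, 1.75 ]
```

Wiring it to real input would just mean replacing the injected pairs with lines read from stdin, each stamped as it arrives.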
pee -> some_process | tee >(command_one) | tee >(command_two) [...]
  # This one might need a bit more magic with named pipes to consolidate the output without race conditions, since command_N will be executed in parallel. Or take a note from the chronic replacement below and use a temporary file to execute them serially.
chronic -> TMPFILE=$(mktemp); some_process > $TMPFILE 2>&1 || cat $TMPFILE; rm $TMPFILE
zrun -> command <(gunzip -c somefile)
$ echo hi |sponge y
$ echo hi > y?