"The commute is exhausting; on occasion he has even flown to Beijing, done experiments, and returned the same night. But it is worth it, says Courtine, because working with monkeys in China is less burdened by regulation than it is in Europe and the United States."
I personally know researchers who have to take similar measures to be able to do their research.
When we are happy as a society to slaughter billions of animals every year for food (usually keeping them in appalling conditions beforehand) I don't understand how we can justify the restrictions we put on scientists.
Animal research is something that we as a society are going to struggle with for many years into the future. I would like to be able to argue that it is for the greater good of the planet, but I don't think that I could put up a good argument that we are being proper stewards of the planet.
One of the major problems with human spinal lesions is the loss of control of the trunk of the body (i.e. the core). Humans have dozens of muscle groups that control balance and posture through minute movements. Without fine-grained control over these muscles, balance is going to be extremely difficult.
Also you don"t need to cut the spinal cord todo data analysis.
I saw this story on a Canadian news show a couple of days ago, a show that regularly does spots on human exploration of Mars. The stories are similar: the headline feel-good story is about the shiny new Mars habitat or spacesuit someone is testing, but they haven't figured out how to actually get it to Mars. The result is the false impression that the dreamed-of future, colonizing Mars and curing paralysis, is closer than it really is.
I personally want to just have an RSS reader that gives me the articles from the sites I freaking asked for. Not what some service thinks I'll like. Google, Pocket, and other sites are already doing a great job of thumbing through what I read and trying to sneak in their "suggestions": that's the kind of invasive data-combing that I want to avoid.
Maybe I'm not representative of most RSS users in that regard; I'm not sure, I haven't done the market research.
It feels to me that suggested sources leads to promoted sources leads to paid content.
About 90% of what I see is irrelevant, but I need to see it to filter out the relevant data. I'd pay a decent amount for a service that can tailor itself and deliver relevant results without much manual intervention. InoReader and some other RSS readers have decent filtering, but it's all manual.
So this excited me, but without a way to import my existing feeds it's largely useless to me.
Wish they posted some screenshots or static pages or something of the sort, so I could take a look and see if I like it more than stringer, which is the current solution I'm using.
There is a 0% chance that I'm going to switch, no matter how good your recommendation algorithm is, if I can't also keep up with my current feeds.
My only criticism would be that the Getting Started page is definitely skewed towards a specific segment. Like, how is Sports not an option? Music? I understand that there are only 9 slots in your designs, but please choose some that are more representative of general interests, not just Palo Alto mid-20s coffee shop startup founders.
The only reasons I can think of to host my own RSS reader are preventing third-party profiling, getting editorial transparency, and future-proofing against the provider shutting down or evolving the service in a way that I don't like. This provides none of those advantages.
Am I missing something?
2) Grouping of articles on the same topic. For example, grouping all articles on Angela Merkel's latest Brexit comments, so I can easily choose one and ignore the rest (a rough sketch of what I mean follows this list).
Grouping would make me an order of magnitude more efficient when reading RSS. I'd happily pay $100 for that feature.
3) Efficiently and automatically manage broken feeds: Automatically retry (over X hours or days), automatically look for a replacement feed, and then let me know which feeds need my attention.
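By "grouping" I mean something as dumb as clustering headlines by word overlap. A toy sketch of the idea (the Jaccard measure and the threshold are just my assumptions, not a spec for any particular reader):

    def tokens(title):
        """Lowercased word set, ignoring very short words."""
        return {w for w in title.lower().split() if len(w) > 3}

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def group_headlines(titles, threshold=0.2):
        """Greedy grouping: a title joins the first group it overlaps with."""
        groups = []
        for title in titles:
            t = tokens(title)
            for group in groups:
                if jaccard(t, tokens(group[0])) >= threshold:
                    group.append(title)
                    break
            else:
                groups.append([title])
        return groups

    headlines = [
        "Merkel warns UK over Brexit negotiations",
        "Angela Merkel issues Brexit warning to Britain",
        "SpaceX schedules next launch",
    ]
    for group in group_headlines(headlines):
        print(group)

A real reader would obviously want something smarter (entities, dates, source weighting), but even this level of grouping would cut a lot of duplicate reading.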
- Secondary links (i.e. comments link for HNews and Lobsters)
- Batch OPML import
- Deploy to Heroku button
- Follow suggestions (we're working on this)
- Switching between feeds should be easier
- Lightweight task queuing system for emails and discover endpoint
- Keyboard shortcuts (vim style)
- GraphQL-style APIs so you have more flexibility for building your own mobile apps
- Android & iOS apps
- Support more sites (RSS data quality is pretty poor and often needs custom logic per site/feed)
- Search articles you've read using Algolia
- Folders/Groups
- Sharing support (e.g. Buffer, Facebook, Twitter, etc.)
The thing I didn't like is that, if it's supposed to be "learning", how come I can't load new stories after I'm done with the initial set?
Further, nowadays, if you have a heart attack and no one makes an effort to resuscitate you, and there's someone nearby who knows how, most people would argue that that person has neglected a moral obligation.
This is one of the arguments made for cryonics, in two ways. If in the future it's possible that we'll be able to revive a cryopreserved body, then you can't really consider someone dead if they're cryopreserved within a certain timeframe after their heart stops (just as today you can't consider someone dead within the ~5 minutes after their heart stops, because of CPR, etc). What's more, the people of the future will have a moral obligation to revive cryopreserved bodies, because not doing so would be the equivalent of not attempting CPR on someone who just had a heart attack. This is the response given when someone asks why anyone in the future would be bothered to go through the hassle of reviving cryopreserved bodies.
Food for thought.
EDIT: Not the specific link I was thinking about, but here are 2 of many others:
 This is a name used in science fiction stories to describe the machine that mechanically repairs tissue damage.
"He suggested using the newly invented stethoscope to listen for a heartbeat if the doctor didnt hear anything for two minutes, they could be safely buried."
"An electrical engineer from Brooklyn, New York, had been investigating why people die after theyve been electrocuted and wondered if the right voltage might also jolt them back to life."
"Starting in the 1950s, doctors across the globe began discovering that some of their patients, who they had previously considered only comatose, in fact had no brain activity at all."
"They had discovered the beating-heart cadavers, people whose bodies were alive though their brains were dead."
"In some cases, their hearts kept beating and their organs kept functioning for a further 14 years for one cadaver, this strange afterlife lasted two decades."
Now... If the brain were dead (by my understanding of a 'dead' organ - its cells have died), wouldn't the brain start either decomposing or being absorbed by the body? If that happens, it's a pretty clear indication that the brain is truly dead. But if it doesn't happen, doesn't that mean the brain cells are still alive (just not communicating for some reason)? And in that case (living cells with a blank EEG), couldn't there be a way to jump-start their communication, as was previously discovered with hearts?
I'm sure that I'm completely ignorant of some critical factor, and look forward to your thorough discussion of it...
I can't tell if this is an error, as if it should have been "[...] patient is the correct term", or if the neurologist is trying to be funny or make a point. IMO the writer/editor should've omitted this quote or clarified its context and relevance to the article.
I'd love to see some documentation regarding the security, and some proof that keys are not exportable.
(I did try Panic back then -- I thought it was quite awkward)
For example, if the first comment is roughly "I read the linked article and disagree because of A, B, and C" then someone with an opposing viewpoint would probably reply to the existing first comment instead of creating a separate comment.
This doesn't mean that the second commenter's voice isn't heard; it just means it all happens under the thread created by the first commenter.
I suspect HN is subject to the same "first comment effect," often to the detriment of later comments that might be more deserving of the top spot.
Does anyone here have ideas for fixing this?
Edit: It's probably related to the cutoff of 30 comments, but it's still not obvious to me...
Some of the best HN comments can come late and never make it to the top.
Global has slowly and consistently built an open source software culture and a lead in web tech for media in Brazil.
c'mon everyone knows genetic population is 30
(I kid I kid but that number has an interesting story https://statswithcats.wordpress.com/2010/07/11/30-samples-st... )
Surprising. Swift, a compiled, static language, is slower than Python in this use case?
Next time I want to build a desktop app, I'll definitely have a go with this.
At one point I was building a Qt Widgets application in Python with PySide bindings. I needed to do some simple processing in several slots and decided to use lambdas as an easy shortcut. Kids, do not even think about attempting this at home, seriously.
I ran into some extremely evasive random crashes and other weirdness. I did not dig to the bottom of the issue, although my guess is that the Python lambdas were executed in the main (constructing) thread's context, and that caused race conditions.
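To be fair, the threading guess is only one of the ways lambdas-as-slots bite you. The gotcha that catches most people is late binding when connecting in a loop; a minimal sketch (PySide imports, Python 3 assumed, and the example is illustrative rather than a reconstruction of my actual bug):

    from PySide.QtGui import QApplication, QPushButton, QVBoxLayout, QWidget

    app = QApplication([])
    window = QWidget()
    layout = QVBoxLayout(window)

    for i in range(3):
        btn = QPushButton("Button %d" % i)
        # Bug: every lambda closes over the same variable `i`, so all three
        # buttons would print "clicked 2" (the loop's final value):
        #   btn.clicked.connect(lambda: print("clicked %d" % i))
        # Fix: bind the current value through a default argument.
        btn.clicked.connect(lambda checked=False, n=i: print("clicked %d" % n))
        layout.addWidget(btn)

    window.show()
    app.exec_()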
My question: how do Golang's goroutines interact with native threads spawned by Qt?
I mean, yes, it's statically linked, but 52 million bytes for a window, a button and a pop-up?
Shouldn't this fit into 5-10 MB? Is the whole Qt lib linked into that binary?
Last I saw, Qt's GPL/LGPL license required you to open-source the code that was statically linked to it. That wasn't much of a problem for desktop platforms, as you could just package up all of Qt into its own Qt DLL (or the equivalent) and link to it dynamically. But this isn't allowed on platforms like iOS, so you were required to buy a license for those.
It looks like a tiny Windows 95 on your phone. Tap targets way too small.
This is not really helpful, and destroys any chance at meaningful discussion.
(Also, last time you told me that rate limiting couldn't be undone and that I should just write an email; what's that about now?)
Is there any real physical evidence that an electron can be in both states at the same time, outside the mathematical framework of QM?
Serverless isn't saving me any of that pain. I still have to configure ACLs for everything, web folders in S3, ensure that my backend isn't hitting any concurrency or timeout limits, ensure that all of my routes are set up properly in the API gateway, that my DB queries are properly tuned and that someone is watching that DB...
All it's really buying me is not having to write Ansible scripts, but instead I have to write CloudFormation templates. Sure, I have to think about maintenance and troubleshooting less, but when I do have to troubleshoot, I'm in for a long, frustrating day.
As easy as it is to create VMs these days, the serverless story is nowhere near as compelling as it should be. It makes the easy things easier, and the hard things really f'ing hard.
Concrete example: I wanted to just store the contents of a GitHub webhook POST in S3, after verifying the hashed secret. Should be a simple case of wiring together the API Gateway to S3, right?
First bug: You can't test an API Gateway -> S3 connection if they are both in the same region. Known issue from back in 2014.
First hurdle: You can't pass the contents of a POST to an auth hook in API Gateway, just a single header value. That means I can't use the API Gateway authentication hook for this purpose; GitHub creates an HMAC hash of the POST contents.
By this point, I was wishing I had just set up a t2.tiny server running flask.
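For comparison, the Flask version really is about this small. A rough sketch (bucket name and env var are placeholders, and I'm assuming GitHub's older X-Hub-Signature SHA-1 scheme):

    import hashlib
    import hmac
    import os
    import uuid

    import boto3
    from flask import Flask, abort, request

    app = Flask(__name__)
    s3 = boto3.client("s3")
    SECRET = os.environ["GITHUB_WEBHOOK_SECRET"].encode()  # placeholder env var
    BUCKET = "my-webhook-archive"                          # placeholder bucket

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_data()
        # GitHub sends "sha1=<hexdigest>" in X-Hub-Signature, computed over
        # the raw request body with the shared secret.
        sent = request.headers.get("X-Hub-Signature", "")
        expected = "sha1=" + hmac.new(SECRET, body, hashlib.sha1).hexdigest()
        if not hmac.compare_digest(sent, expected):
            abort(403)
        # Dump the verified payload into S3 under a unique key.
        s3.put_object(Bucket=BUCKET, Key="hooks/%s.json" % uuid.uuid4(), Body=body)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)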
We use SWF to trigger over 50 separate lambda functions in processing. We've got some very nice internally developed tools to identify which functions are out of date and help with deployment. I'm just curious what else is available to handle DevOps tasks in a Serverless environment (i.e. deploying library updates, etc).
I'm currently working on a side project (which if successful can easily be a product in itself) to create a service that would be the Digital Ocean of FaaS (lots of talk - I understand).
What would be the core features that devs/users/managers are looking for? (something you downright expect for v1 that would compel you to put the service to use for development/production)
Currently some of the items top of my list (in no particular order) are:
* API development (like AWS's APIG)
* Support for more languages by default
* Ability for users to add a custom-defined language easily
* Creation of an opinionated framework / workflow for FaaS development (you don't need to do this if you don't want to...but FaaS first apps would be done better/easier this way)
* Rewriting common products (Wordpress, Ghost, Jekyll, etc.) to be served via FaaS.
Things I don't want to work on (or would prefer to not jump into):
* Storage service (S3 alternatives)
* Databases (there are so many out there...building a cloud is hard, building a database is even more so)
Who do I feel would pay for this?
* Front end & Mobile devs who don't want to complicate a backend
* Mobile devs who don't want to complicate a backend
* Enterprises maybe?
Content is stored in our public GitHub repo, and we use TravisCI to build Jekyll pages on commit/merge, which are uploaded to and served from S3.
Interactive functionality is provided via React on the front end, and API Gateway / AWS Lambda on the backend.
The API Gateway/Lambda provide various proxies to our various sandbox API environments, so developers can quickly try our APIs directly on the site.
You don't need crypto. You just need a machine that prints out a human-readable receipt that the voter can see but not alter, which then drops into a secure holding area on the machine. At the end of the day, you randomly select say 1% of all the machines and hand count all the ballots inside, making sure the counts and votes match. If they do, then you can be reasonably sure it wasn't tampered with, and if they don't match, then you can hand count all the paper ballots using the old system to verify the computer.
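To make the audit step concrete, a toy sketch of the check (the data shapes are made up, and a real risk-limiting audit would be more careful about sample sizes):

    import random

    def audit(machines, sample_rate=0.01):
        """machines maps machine id -> {'electronic': {...}, 'paper': {...}}
        per-candidate tallies (made-up shape). Hand-count a random sample
        and compare it to the electronic totals."""
        sample_size = max(1, int(len(machines) * sample_rate))
        sampled = random.sample(list(machines), sample_size)
        mismatches = [m for m in sampled
                      if machines[m]["electronic"] != machines[m]["paper"]]
        # Any mismatch triggers a full hand count of the paper ballots.
        return (len(mismatches) == 0, mismatches)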
I like the idea of a county/precinct/district being the entity that signs the results of a vote. This would protect the anonymity of individual voters, since results are reported on a per-county basis today anyway. And if a county is suspected of voter fraud, you could always add their public key to a blacklist.
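A rough sketch of what "the county signs its results" could look like, using Ed25519 via PyNaCl (library choice and message format are my own assumptions):

    import json

    from nacl.signing import SigningKey

    # Each county holds a long-lived signing key; the verify (public) key is
    # published so anyone can check the reported totals.
    county_key = SigningKey.generate()
    public_key = county_key.verify_key

    results = {"county": "Example County", "race": "Governor",
               "totals": {"Candidate A": 10412, "Candidate B": 9876}}
    message = json.dumps(results, sort_keys=True).encode()

    signed = county_key.sign(message)   # signature + message
    public_key.verify(signed)           # raises if the results were tampered with

    # A county suspected of fraud could have its public key blacklisted so
    # that results signed with it are no longer accepted.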
Possible expansion topics:
- Handling a ballot with multiple separate races
- Backup system for when the computers have gone down
- Can it assist in any way with "voter suppression" issues?
"And so long as the counties can protect their secret keys..."
From reading this article, it would seem that it satisfies the first 4 points, just not the 5th. Is that the main reason to have to use the blockchain, to prevent tampering from the inside?
1) The idea that security is something you check off and are done with is dangerously wrong. Security must be continuous: updated, reviewed, etc.
2) The idea that you can "encrypt" [secure] your entire life is ludicrous and leads to many dangerous security misconceptions. You don't even have control of your entire life, let alone the ability to secure it. Most of the data on you is owned by others and not even available to you to secure. The world is not private or secure. Everyone needs to know and think about this when they are tweeting, sexting, talking shit about the future president and then being surprised when the Secret Service comes to investigate.
3) The idea that security is either on or off, a binary, that you can be secure or not, is false and leads to extremely poor security choices, over- or under-securing. Nothing is secure. There is no such thing as SECURE. Things lie on a gradient from easy to break to impractically difficult. Things on the impractical-to-break end are still broken due to social engineering, externalities (power consumption of the CPU), poor practices surrounding the item, etc. Security is making the effort required to get an item greater than the value of getting the item.
Categorize your levels of paranoia appropriately.
I appreciate how practical these tips are and I hope people will follow them.
I have two quarrels with this:
> Andy Grove was a Hungarian refugee who escaped communism [... and] encourages us to be paranoid.
I'm pretty sure that Grove was referring to business strategy, not communications security.
> Congratulations: you can now use the internet with peace of mind that it's virtually impossible for you to be tracked.
Something I've seen over and over again is that Tor users tend to have a poor understanding of what Tor protects and doesn't protect. The original Tor paper said that Tor (or any technology of its kind) can't protect you against someone who can see both sides of the connection -- including just their timing. Sometimes, some adversaries can see both sides of a person's connection. As The Grugq and others have documented, Tor users like Eldo Kim and Jeremy Hammond were caught by law enforcement because someone was monitoring the home and university networks from which they connected to Tor and saw that they used Tor at exactly the same time or times as the suspects did. (In Hammond's case, recurrently, confirming law enforcement's hypothesis about his identity; in Kim's case, only once, but apparently he was the only person at the university who used Tor at that specific time.)
As law enforcement has actually identified Tor users in these cases, I think people need to understand that Tor is not magic and it protects certain things and not other things. In fact, I helped to make a chart about this a few years ago:
This chart was meant to show why using HTTPS is important when you use Tor, but it also points to other possible attacks (including an end-to-end timing correlation attack, represented in the chart by NSA observing the connection at two different places on the network) because many people in the picture know something about what the user is doing.
I've been a fan of Tor for many years, but I think we have to do a lot better at communicating about its limitations.
Don't use this email for anything else.
On a related note, I noticed that my Windows Phone displays text message notifications even when it's locked... So adding a PIN doesn't prevent an attacker from completing 2FA if they have access to my phone.
Btw, I'm interested to hear how well training with large one-hot encoded vectors scales. A paper someone pointed me to recently on HN suggested that it doesn't scale very well:
One-shot Learning with Memory-Augmented Neural Networks [https://arxiv.org/abs/1605.06065]
This got dropped during editing... Updating the post to make this more prominent.
Also, really well done on the site design. Love the graphics, font, layout and 'progress bar' animation at the top. Very nice UX overall.
I understand matrix multiplication, but it seems that (some of) these matrix-to-vector calculations are actually trained by/as part of the neural net... but how exactly that works I can't figure out coming at it from articles like this.
I just wish I understood the rest of the article...
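If it helps, the usual intuition fits in a few lines of NumPy (a toy sketch, not the article's code): a one-hot vector times a weight matrix just selects one row of that matrix, and that matrix is an ordinary trainable parameter whose rows get nudged by gradient descent like any other weight.

    import numpy as np

    vocab_size, embed_dim = 10_000, 64
    rng = np.random.default_rng(0)

    # The embedding matrix is just a weight matrix, initialized randomly and
    # updated during training.
    W = rng.normal(scale=0.01, size=(vocab_size, embed_dim))

    word_id = 42
    one_hot = np.zeros(vocab_size)
    one_hot[word_id] = 1.0

    # Multiplying by a one-hot vector selects row `word_id` of W...
    via_matmul = one_hot @ W
    # ...so in practice you skip the multiplication and just index.
    via_lookup = W[word_id]
    assert np.allclose(via_matmul, via_lookup)

    # "Training" that matrix: whatever gradient backprop assigns to this
    # word's vector is applied to that one row of W.
    grad_wrt_embedding = rng.normal(size=embed_dim)  # stand-in for backprop output
    learning_rate = 0.1
    W[word_id] -= learning_rate * grad_wrt_embedding

This is also why the one-hot scaling question upthread mostly comes down to memory for the matrix: the forward pass is a lookup, not a full matrix multiply.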
Couldn't resist; it's a Friday.
Martin Fowler's DDD posts: http://martinfowler.com/tags/domain%20driven%20design.html
Book: https://www.amazon.com/Domain-Driven-Design-Tackling-Complex...
I agree with the author that components are the true innovation of React, because they encourage reusable building blocks by default. Contrast with classic desktop and mobile UI toolkits which, while often also using or encouraging MVC, do not require the developer to subdivide their own codebase into reusable widgets. Instead, they allow composing an entire screen (window, page, form, whatever) from the built-in widgets. Making a reusable widget is possible but extra work, and therefore not done. In React, it's the only way to go.
This is awesome about React, but it has nothing to do with data flow architecture, which is what MVC is.
The mistake web developers made for years was trying to shoehorn DHH's backend remix of MVC into the frontend, throwing away decades of UI-building architecture knowledge. I'm happy the Facebook people rediscovered MVC, and I'm even happy they gave it a new name (Flux), because MVC frankly has gotten way too many definitions.
But saying that Flux/Redux killed MVC is like saying Clojure killed Lisp.
The point is to not munge all these things into one monolithic, horrible, grim mess. As long as you are separating the concerns of showing something to a user, allowing them to control it, and backing it all with a potentially remote service, who cares what the pattern is precisely called? It's still a flavour of the original desire that we named MVC.
All these presenter/unidirectional patterns are _just_ the underlying desire of MVC, and the only reason people talk about how "MVC isn't right" or "is dead" is because they've followed MVC like dogma instead of as a guideline for separating your presentation, control and service logic. I had exactly this debate back when everyone was talking about MVP as if it were some revolutionary new thing. It's not; it's all the same thing, and GOF was never supposed to be a template for software but a way of talking about specific ideas that architects could then riff on. They're chords, not one specific tune that you _must_ play in a very specific way.
"Unidirectional architecture" is a weird name for what is a fairly standard abstraction. Every front-end at the top level is:
f(my_entire_state, some_event) -> my_entire_state'
Front-end development has been plagued by unclear patterns with weird names (MVC, MVVM, MXYZ...) since forever; every time the patterns are criticized you hear "you did not understand it", and new names keep popping up. It seems the industry is stuck remixing reasoning around nouns, and is unable to step back and reason around data.
BONUS: Sprinkle some CSP to get elegant concurrency, throw away callback hell. Sprinkle some React to get fast DOM manipulation, throw away Flux (heresy!) - it has way too many names to worry about (action creator, action, dispatcher, callbacks, stores, store events, views) and encourages some bad practices around the use of stores.
My $0.02
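To make the f(my_entire_state, some_event) -> my_entire_state' point concrete, here's a framework-free toy sketch (Python rather than JS, and every name in it is made up): the top level of the UI is just a fold of events over state.

    from functools import reduce

    def update(state, event):
        """Pure function: old state + one event -> new state."""
        kind, payload = event
        if kind == "add_todo":
            return {**state, "todos": state["todos"] + [payload]}
        if kind == "set_filter":
            return {**state, "filter": payload}
        return state  # unknown events leave the state untouched

    initial = {"todos": [], "filter": "all"}
    events = [("add_todo", "write docs"),
              ("add_todo", "ship it"),
              ("set_filter", "active")]

    # The "front-end at the top level": fold every event through update().
    final_state = reduce(update, events, initial)
    print(final_state)

The view is then just render(final_state); everything else (dispatchers, stores, action creators) is vocabulary layered on top of this one function.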
I'm still not completely convinced by React, and I may very well be wrong, but the same skepticism that sometimes makes me feel out of touch, also provides some sanity in this madness.
For some reason that I can't quite point out yet, Vue.js feels like the nicest one yet, though I've only played with it briefly.
DOMStream = view(model(intent(DOM)))
DOMStream.subscribe(render)
model() takes streams of interactions, wires them up with data retrieval and mutation streams, and returns these data streams.
view() takes the data streams and uses them to create a stream of DOM mutations, which it returns.
The nice thing is that these observable streams are really nice to filter, map, debounce, etc. Also, it helps with fast realtime data, because you can easily wire up these fast streams to a tiny part of your app and the rest of it won't even notice (which is a bit ugly if you've got a big state tree that represents your whole app state).
The not-so-nice thing is that controlling them completely declaratively has a steep learning curve.
I can guarantee that MVC is still used on the frontend as well as on the backend. People are still generating value from MVC apps.
Perhaps it's not held up as the holy grail, since the new holy grail is pronouncing it dead. Perhaps it is dying. Dead it is often not.
After building out some front-end MVC work, I can see the value in Flux architecture and React components. I'd have a hard time justifying a rewrite though. It's still very much alive there for me. That's also the case for many others.
Model = the state. Controller = the update function changing the state based on an action. View = declarative, needs to send actions to change the state; gets redrawn on state change.
The idea of MVC, as far as I'm concerned, is to separate the View (declarative as much as possible), the State (just data, no logic) and the Controller (business logic receiving commands/actions/calls/whatever, which changes the state and lets the view know that it needs to update).
Is that pattern dead? Far from it.
I started calling it Model-View-Store recently as I think that best describes it. There are a few unique things here that I think are valuable.
Starting with Models: Models all return observable values. So if I query for a single record I get back an observable object; for a list, an observable array. I define `@query` decorators on the models to set up these queries. Models include prop validation, normalization, consistent operation names, and more.
Views come in two types: application and presentational. App views are all decorated to automatically react to mobx observables, so you can literally have a Model `Car` and in a view call your query `Car.latest()`, and your view will trigger the query and react to it accordingly. One line model <-> view connections!
Then you have Stores: they are just logical containers for views. Any time a view needs to do more than some very simple logic, you can attach a mobx store to it with very little code. Stores can also manage the data from the Model (and because the Model returns observables, this is usually a one-liner). But they don't have to. Stores are located side-by-side with the views they control (and are passed down to sub-views when needed).
I've been working on this system for a bit now along with our startup and we've been able to deliver some pretty incredible stuff very quickly. Having a consistent model system is crucial, I can't imagine programming without having prop validation and a single place to look for my model queries.
Going to be releasing pieces of it over the coming weeks, and hopefully a full-stack example that's really legit soon.
When I was growing up coding native UI apps, MVC was all about front-end. UI toolkits were traditionally MVC or M(V+C) going all the way back to SmallTalk, and "server-side" typically meant apps without "V" or a "V" that was so small and hardcoded in without separation...
Other abstractions are designed similarly to allow tests against pieces of the internal API in total isolation, though separated on different lines. Maybe you have transient state stored in view models while persisted state is stored in models. The isolation only helps as size grows, and it can keep size under control mostly by having a good data model for what each component / layer / aspect actually does.
It makes it easier to rewire everything when you start with a switchboard. UI changes are either "Oh my god we're going to have to re-write so much stuff if we do it that way!" and you wind up not making drastic UI changes or "Sure, we can make that thing tickle the controller instead of that other thing."
It espouses a (I believe?) slightly more traditional MVC interpretation where your Controllers are usually extremely light and in most cases completely optional. It encourages a MVVMC (Model View View-Model Controller) approach to encapsulate view-state.
OP is painting with an overly broad brush: Angular is not representative of all client-side MVC. I maintain an app where the view handles the browser events - as it should, and the only data that gets passed between the view and the controllers are the (business) models.
You have Model, View, ViewModel, and whatever auxiliary libs those need.
That being said, I don't know how to feel about Angular 2's approach to components. Decorators are useful but can diminish the benefits of component based architecture when misused.
Saying MVC is dead is like saying "CPU" is dead, no it's not, but it will always keep on improving.
I'm a longtime user of BeautifulSoup.
HN looks good: http://www.sitetruth.com/fcgi/viewer.fcgi?url=https://news.y...
AFL-CIO, the site used in the article, looks great: http://www.sitetruth.com/fcgi/viewer.fcgi?url=aflcio.org
Twitter's images disappear: http://www.sitetruth.com/fcgi/viewer.fcgi?url=https://www.tw...
Adobe's formatting disappears: http://www.sitetruth.com/fcgi/viewer.fcgi?url=https://www.ad...
Intel complains about the browser but looks OK: http://www.sitetruth.com/fcgi/viewer.fcgi?url=intel.com
Grubhub gives us nothing as plain HTML: http://www.sitetruth.com/fcgi/viewer.fcgi?url=grubhub.com
Same for Doordash: http://www.sitetruth.com/fcgi/viewer.fcgi?url=doordash.com
(No scraping restaurant menus with BeautifulSoup.)
Cool stuff in pure CSS works fine: http://www.sitetruth.com/fcgi/viewer.fcgi?url=css3.bradshawe...
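For anyone curious, the BeautifulSoup side of this is tiny; a minimal sketch (requests + bs4 assumed, and the CSS selector is whatever HN happens to use at the time). The catch is that it only sees what the server returns as plain HTML, which is why the JS-rendered Grubhub/Doordash pages above yield nothing useful:

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://news.ycombinator.com/",
                        headers={"User-Agent": "example-scraper/0.1"})
    soup = BeautifulSoup(resp.text, "html.parser")

    # Story links on HN are server-rendered HTML, so this works; the class
    # name has changed over the years, so adjust the selector as needed.
    for link in soup.select("a.storylink")[:5]:
        print(link.get_text(), "->", link.get("href"))

    # A page that builds its content with JavaScript would give BeautifulSoup
    # little more than an empty shell to parse.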
First, let's not rely too heavily on the analogies with drugs or prostitution. The differences between CP and drugs / prostitution are too large to ignore anyway.
CP consumers are often producers as well. That's a fact: you want CP, so you make some yourself and swap it with others to get more. This isn't universal, but it's common enough that you should know about it. So the visitors to the CP web site are not all just consumers of CP; many of them are producers as well. This is relevant because you have to weigh the damage of distributing CP against the benefit of catching people who produce CP. People have stated that distribution revictimizes the children, but I would weigh that against the ability to catch people who were either producing their own or at least supporting other producers of CP.
So the FBI discovers this server, operates it for less than 30 days with a Tor exploit, and catches 200 people using the site. Yes, the FBI was complicit in the distribution of CP, but rephrased as a trolley problem, this is basically like not pulling the lever, allowing the distribution to continue for a short time, and using that to catch 200 consumers, and how many of them are producers? You can pull the lever now and stop the distribution of CP, or you can let the trolley barrel down the tracks for a short time and save all these people somewhere else.
(People are saying that the exploit may have done damage to other police investigations from other countries; I don't see any evidence that the exploit damaged the computer, merely that it leaked information about the computer.)
1. The crime that utterly dwarfs all others is involving children in the making of child porn.
2. After that, the crimes that dwarf all the rest are those that provide financial or practical support to child porn makers. Consuming child porn is generally regarded as one of those, and I'm fine with that categorization.
3. I'm sorry, but violating a victim's theoretical privacy by distributing the images a little further doesn't seem to be nearly as big a deal as helping to prevent the next live video of child porn from being made.
I'm usually regarded as being pro-privacy, but privacy is not something to be a rabid extremist about. Preventing physical sexual abuse of children, on the other hand, is a fine area for extremism.
From a moral point of view, child pornography is deontologically wrong. Nothing can justify its existence. Even if such a sting managed to shut down the entire industry, it would be moot to attempt to argue for its moral goodness in consequentialist terms.
The FBI could have used other means to establish criminal intent in the visitors to the websites, along with the fact that they had used Tor to search out and visit those websites in the first place. They could have made prospective viewers engage in a series of incriminating acts, such as requiring them to follow a series of links with the promise of finding the material, or making them refresh the page. There was no need to provide the actual offensive material in order to make a solid case.
> On August 4, all the sites hosted by Freedom Hosting, some with no connection to child porn, began serving an error message with hidden code embedded in the page. Security researchers dissected the code and found it exploited a security hole in Firefox to identify users of the Tor Browser Bundle [https://www.wired.com/2013/09/freedom-hosting-fbi/]
However, as far as we know, unlike the more recent Playpen thing, in the Freedom hosting case the FBI did not actually serve child pornography, they just displayed an error message. I don't see anything in this article that suggests otherwise.
I'm not sure whether this is OK or not.
The idea was |Stealth VM| --> |Tor router VM| --> |Virtual Box NAT|
The Tor router VM was running redsocks to route all TCP traffic through tor's socks proxy interface. The stealth VM also used tor's DNS service.
That way, even if the stealth VM is compromised, it can't access the internet directly.
So, is getting someone arrested as easy as spoofing their network information and visiting those sites? I can already imagine trolls using this to have people swatted.
So they seized an onion hosting provider that had 23 cp sites, they ran those sites for a few weeks, then shut them down.
Keep in mind, you can't just pause the site and expect your targets not to notice; they had to actively maintain the site (and consider what that means) to keep their targets coming back. It's disgusting and disturbing. And that's just what we know about; it's likely also just the tip of the iceberg.
At least with Fast & Furious I think it was real criminals running the guns and just a failure to intervene. I think a failure to intervene here would be seen as unacceptable as well. But here we have way more than failure to intervene, they effectively provided the guns and helped run them across the border.
Time to charge the FBI with aiding and abetting. Period. Equal treatment under the law. Period.
That's quite the exploit.
Given that 95% of people in the world are not from the US, how many visitors were police officers from other countries, conducting their own investigations?
Two Virtual Machines, the one you actually use for browsing and stuff only connects through the gateway virtual machine.
If an exploit breaks out of Firefox, it is contained in the browsing VM; if it somehow breaks out of that VM, it is still only in the gateway VM.
We could keep going down possibilities, but we are far removed from attack vectors that actually exist.
Federal jurisdiction is implicated if the child pornography offense occurred in interstate or foreign commerce. This includes, for example, using the U.S. Mails or common carriers to transport child pornography across state or international borders. Additionally, federal jurisdiction almost always applies when the Internet is used to commit a child pornography violation. Even if the child pornography image itself did not travel across state or international borders, federal law may be implicated if the materials, such as the computer used to download the image or the CD-ROM used to store the image, originated or previously traveled in interstate or foreign commerce.
Theoretically, would a general citizen be exempt from the ban if he manufactured his own CD-ROMs, and his own CPUs in-state?
It might be illegal for them to operate the sites for extended periods of time. It doesn't seem illegal for them to deploy malware as part of an investigation. I'm looking at (f) here:
So the worst that could happen is that the evidence gets thrown out. If they weren't going to otherwise be able to nab the person, the worst that could happen is they lose the case.
"The big change is the proximity to death," he said. "I am a tidy kind of guy. I like to tie up the strings if I can. If I can't, also, that's O.K. But my natural thrust is to finish things that I've begun."
"For some odd reason," he went on, "I have all my marbles, so far. I have many resources, some cultivated on a personal level, but circumstantial, too: my daughter and her children live downstairs, and my son lives two blocks down the street. So I am extremely blessed. I have an assistant who is devoted and skillful. I have a friend like Bob and another friend or two who make my life very rich. So in a certain sense I've never had it better. . . . At a certain point, if you still have your marbles and are not faced with serious financial challenges, you have a chance to put your house in order. It's a cliché, but it's underestimated as an analgesic on all levels. Putting your house in order, if you can do it, is one of the most comforting activities, and the benefits of it are incalculable."
The quote that stood out the most for me:
"In a pursuit like rock n roll, which is entirely devoted to redemption, Cohens ideas were not only old but radical. His peers all insisted that salvation was at hand. To go to a Doors concert was to stare at the lithe messiah undressing on stage and believe that it was entirely possible to break on through to the other side. To see Cohen play was to gawk at an aging Jew telling you that life was hard and laced with sorrow but that if we love each other and fuck one another and have the mad courage to laugh even when the sun is clearly setting, well be just all right. To borrow a metaphor from a field never too far from Cohens heart, theology, Morrison, Hendrix, Joplin, and the rest were all good Christians, and they set themselves up as the redeemers who had to die for the sins of their fans. Cohen was a Jew, and like Jews he believed that salvation was nothing more than a lot of hard work and a small but sustainable reward."
Back in July he wrote to a dying Marianne Ihlen, "Know that I am so close behind you that if you stretch out your hand, I think you can reach mine."
I have cried more tears listening to Leonard Cohen than all the other tears I've cried combined; his music, his words, his poems have always resonated deeply within me. He truly is my favourite artist. We listened to him daily in my dad's house, and I grew to find an incredible amount of peace in his voice. Love that the HN community seems to like him as much. Rest well, sir.
2016 - what a year.
I always wanted to meet him and now, I'll never have the chance.
For years to come you will recall
the music's death, the soldier's fall,
and your songs salute them both.
So: Hallelujah!
Hallelujah! Hallelujah!
Hallelujah! Hallelujah!
The first Cohen song that I ever heard was "Everybody Knows", in Pump Up the Volume.
If you want a lover
I'll do anything you ask me to
And if you want another kind of love
I'll wear a mask for you
If you want a partner, take my hand, or
If you want to strike me down in anger
Here I stand
I'm your man
I'd really like to see McGill, our common alma mater, commemorate him in some way.
His songs are dark and poetic and really keep you entranced. I'm glad he released his last effort (Leaving the Table is a great one for the occasion) and seemed totally at peace in his New Yorker feature.
Wasn't a fan of hers until...https://youtu.be/ikdLBQACC74
> While never abandoning Judaism, the Sabbath-observing songwriter attributed Buddhism to curbing the depressive episodes that had always plagued him.
They're ambiguous so that helps.
"But let's not talk of love or chains and things we can't untie, your eyes are soft with sorrow, Hey, that's no way to say goodbye."
RIP, Leonard Cohen
I can't be sure, but back in 2008 when he played "Democracy is coming to the USA" he seemed to be delighted that Obama won. IMHO this part of the song is more appropriate for Leonard's last days on earth: "I love the country but I can't stand the scene".
edit: removed comment about the president elect
Like a bird on a wire
Like a drunk in a midnight choir
I have tried, in my way, to be free
I don't get why people are so emotional when famous artists die, posting on Facebook and whatnot. We weren't personal friends with them, so it won't affect our lives in any way. Their works are still as available as ever, and still as great as ever. We can still listen to their music every day.
If they died old then they've had a good run to make a good body of great work that can be their direct legacy for hundreds of years. Few people achieve that.
A recent profile that I greatly enjoyed:
Which was quickly followed by:
I was fighting with temptation
But I didn't want to win
A man like me don't like to see
Temptation caving in
It seems like so many of my favorite musicians and songwriters have passed in the last few years, and it's a struggle to figure out why. I'm a millennial with a wide taste in music, from early-20th-century blues to contemporary EDM, but it seems like the musicians whose talent you could just sense with every note and lyric are rapidly disappearing. I should be too young for this kind of cynicism, but it's an easy trap to fall into when comparing Dylan, Bowie, or Cohen to some song on the pop charts or an artist in the overwhelming field of independent musicians.
It's a sad day but I can't help but marvel at the universe. It is a kind of unique, rare beauty when a life-long artist like Bowie or Cohen close out their final chapter by releasing an album within weeks of their death.
He looks so much like Adam Sandler
One of my favorites is "Everybody Knows": https://www.youtube.com/watch?v=Gxd23UVID7k
You will be missed.
So long, Leonard.
There is a crack, a crack in everything.
That's how the light gets in.
At least he lived the most graceful life. Having only ever known him in the last 20 years, it seems as if he started as an old man, and died young.
--You will be missed but never forgotten.
"I'll be speaking to you sweetly from a window in the Tower of Song"
One of the greats.
Give me a Leonard Cohen afterworld So I can sigh eternally
Leonard's music had an uncanny sense of timing, both musical and cultural. He referenced the external, political world, indirectly - not through selfishly inward bullshit, like many of his contemporaries, but by sifting it through relationships with others and his relationship to the divine.
As I am writing this, the next article on Hacker News is about Peter Thiel and his ascension to whatever office he is seeking in Trump's cabinet. His views on the damage women and minorities have done to Libertarianism (whatever that is), and how democracy is shit, are well known, and I will let you judge how Palantir has benefited humanity.
The thing that gets me is his straight faced desire for immortality. Note that he doesn't wish for immortality for someone who is great, he wishes it for himself.
RIP Leonard. You already are immortal.
Yes, Python has a pretty dirty history, with many people choosing to stick to the Python 2.7 that they knew and loved. And yes, commercial software tends to move waaay slower than the wider community (many banks are still running COBOL). If you're focused on making money and pleasing clients then "it worked for us before" is always going to be the strongest argument.
Major players in the Python ecosystem have pledged to move away from Python 2, and if we had non-pie-chart visualisations, I'm sure we'd see huge trends towards Python 3 in the last year. Even slow-moving corporations are starting to use Python 3. Yes, macOS defaulting to Python 2 is still a problem, but Ubuntu switching to default Python 3 is already a huge step towards getting companies to move forward.
There's also the sentiment of "I don't know if [insert module here] will be supported", which has become mostly baseless fear, but people still think Py3 support is lacking (when it's not!).
https://python3wos.appspot.com/ -- also, take notice how 9 of these guys come from here
Edit: when I posted this comment, the link was titled "Python 3 largely ignored" (not the article per se, but the submitted link had been titled that way). It has been changed, but this was a bit of important context for my comment.
The only thing I'm dealing with which is still not Python 3 compatible is AWS Lambda, which only offers 2.7 right now. Supposedly Amazon is working on 3.x but... come on.
That's why Python 2.7 will be around and maintain dominance for a while.
When the latest stable releases of the primary environments default to something, you have to have very good reasons not to use that default (even when it's easy to change).
Administrators (of the old school, not the new "everything in containers" school) don't like using non-defaults.
Recklessly breaking backwards compatibility without a smooth migration plan is hostile to developers. Although it's not a programming language, this is something that I think React has gotten right with their current versioning scheme.
I find it funny how a small group of people thinks they know better than the outer community, to the point that they feel like they should have a say in what thousands of businesses use to run successful code.
More than this, I would argue that most people using Python 3 are those new to the language. This is only from personal experience, so it's really just anecdotal.
PS: As a kind of joking side-note, I know the general argument is that "Python 2 was broken", but really, how broken can something be when thousands of businesses depend on it and, more than that, choose to keep using it when a "better" alternative comes about?
I don't want to add a dependency for end users when my current code works fine with the built-in Python at 2.x. Nor do I want to bloat the size of downloads by embedding something like an entire interpreter binary. (Swift has the same issue; there is currently no way to refer to it externally, as it is in flux; I do not want to embed loads of Swift libraries in every build, so I will wait to migrate until they have stabilized the binaries and made them available from a common root.)
Yes, it breaks a lot of things, but it's totally worth it for the performance gains.
Py3k partially solves problems like Unicode or async/await, but these are non-issues for skilled Python 2 developers.
People like incentives to upgrade. Period.
Right now, the major things stopping me from using Python 3 for everything are:
* Ansible
* Two large in-house developed apps taken over from previous devs
* Debian Wheezy
Other than that, I try to use Python 3 as much as possible. I know a lot of the workarounds for getting pyvenv working on various distros, for example.
I also know about a lot of alternative libs like ldap3, dnspython3 and more.
Though all it takes is one dependency to throw a wrench in the plans, the major projects are Py3 now.
Though porting large python 2 projects is still a huuuuuge pain. This is more a result of python 2 badness than python 3. But a lot of work.
Point is, we're a hybrid shop that does all first-pass services in Python 2.7 and then moves them to Go when they become sufficiently trafficked and/or critical.
So for this feature alone, I'm all in on Python 3.
Teachers and kids are already ahead of the game.
When today's kids graduate they'll view Python2.7 in perhaps the same way I view Delphi or VB6 ("you're still using that..?" etc,).
The ecosystem has moved far enough that it's just a matter of time before the big frameworks etc. deprecate Python 2.7, and this will mostly happen around its real EOL.
I suppose the sample set (projects using a tool like Semaphore) is likely to be biased towards more forwards-looking teams, but still.
CPython is so slow, they really should start taking performance more seriously.
How many of the projects support both?
> [MSE] doesn't have: parental control, built-in VPN, webcam protection, password manager, backups, exploit protection, protection for online banking and online shopping, proactive protection against future threats and dozens, scores, hundreds of other features which are all useful in providing maximum protection and a better user experience
That's exactly what I like about it. Stuff the "user experience": I don't want an AV product that tries to run my life for me. (I don't want Windows 10 to do it either, which is why I tried it for less than a week and went back to Windows 7.) AV products are bloated, difficult to use and always in your face when they should just silently remove viruses. Which is what MSE does for me.
Pretty much all of my "family tech support" is related to the AV doing something stupid like auto-deleting cookies or flashing up big scary messages for something trivial.
However, Windows Defender seems to be good for me on Win10; it just sits there out of the way, and I don't even know it's running. I LIKE the fact that it doesn't have "online protection" or password managers or parental controls or whatever. It feels lightweight and does not cause everything to become 3x or 4x (or worse!) slower like every other AV software I've encountered.
Whenever I go to perform family tech support I remove any random AV software they've been tricked into installing and just leave Windows Defender and that usually solves the issues (obviously making sure they are up-to-date on patches & still using 2FA)
I had to install Kaspersky on my main laptop since some VPN software imposed a policy that it be installed and up to date in order to connect to a contractor's secured network. It was absolutely terrible. It killed my battery, slowed my machine, killed my TCP stack at one point, interfered all the time, and became generally unbearable. It frustrated me so much, I now do all network operations via a secured VM to avoid the Kaspersky curse on my main work machine.
When I was doing a lot of Mac OS X kernel/driver work 8-10 years ago and keeping up with all the Darwin lists, we'd get tons of questions from A/V devs porting their software from Windows to Mac. There were all kinds of bad questions. The worst one I remember is somebody asking why they were not allowed to hold a kernel mutex across notifying a kernel-space A/V daemon and waiting for it to respond (deadlock?).
After seeing multiple questions like this from these folks, I resolved to never run a 3rd party A/V suite again, and have run nothing but vendor provided A/V.
A friend of mine runs a 3-person software company making desktop Windows software. Nothing terribly exciting, think - a ToDo list or similar. They put nothing in the kernel, stick to documented API, make no deep tie-ins into the system (e.g. Windows Explorer extensions). Just a perfectly simple standalone piece of software with minimal dependencies that can run even on XP.
Not two months ago they started getting reports that the software was disappearing from users' machines. The Start menu icon was still there, as was the Uninstall entry, but the EXE was nowhere to be found. Naturally they thought of the antiviruses, but there was no pattern. Fast forward two weeks, and the only commonality between all reports was a freshly installed Windows 10 update. The update silently wiped their software off. And to understand why that happened, or to file a "false positive" report with Microsoft, the only option was to cough up a few hundred dollars to open a "priority support" ticket with them. Not everyone was affected, just a fraction of a percent. You could still reinstall the software and Windows wouldn't make a peep or complain in any way.
While it made very little sense, it still clearly showed that users were no longer in control of their machines. Moreover, Microsoft outright lied when they said "all your files and apps will remain where they are" while installing an update.
So it's not just about losing control over your own computer; it's also about being treated like a sheep that Microsoft owes no explanations to and can do what the hell it wants with. I sure hope Kaspersky Labs has enough rage, funds and patience to drag Microsoft through the courts and whip it back into place.
The idea that Kaspersky is somehow radically better than other AV vendors is a joke. Sure, some of them are comically bad, but none of them are that good. "Good enough" is often good enough.
I'm a fan of Kaspersky's research. AV isn't one of the areas where people need to be spending their time, though. I don't know how you could say AV works with a straight face.
ALSO: MS isn't a monopoly anymore.
1. Defender is not the best AV out there from a strict efficiency perspective (IMHO, Defender is good enough for most people and is quiet enough & bloat less enough compared to a lot of the competition).
2. Killing the competition in the specific AV domain is bad for security (IMHO, perfectly valid point).
3. MS is globally trying to kill any competition by abusing its dominating position (IMHO, another perfectly valid point).
2 & 3, while absolutely true, are shadowed by 1 which is a very questionnabe point.
As messy as AV/firewall are on Windows 10, let's not forget how things were before, in the bad old days; security products were sometimes as bad as the malware they claimed to protect you from. Remember when you helped family and friends and Norton was so difficult to remove it required a dedicated removal tool? Remember the countless cleaners that used all kinds of scummy advertising techniques to trick users into installing them, often decreasing performance and safety?
As the "computer guy" for a lot of people, I'm glad that AV+FW are included in Windows 10. I am, however, disappointed how sub par they perform and how user hostile they are.
On Windows 10, the firewall is completely opaque and Microsoft decided to remove the firewall icon from the tray. So users naturally don't know if it's installed or not or what it's doing. Also, it's buggy as hell because on more than one computer I've had to keep resetting it to defaults simply because it would regularly stop ALL outgoing connections. Took some time to figure that one out and for most casual users that would have been impossible to solve, especially since there is no freaking firewall icon to click on anymore.
The antivirus has a more visible and sane presence but performs poorly in the independent AV tests. For some reason it changes names more often than a porn star, further confusing users. The blog post fails to mention Microsoft Defender, the fifth incarnation of the AV on Windows 10, so there are five different AV that Microsoft offers/has offered.
Microsoft needs to improve the quality of their built-in security products, both how successful they are at protecting users but also the overall usability experience.
Independent of that, running Kaspersky means installing Kaspersky's rootkit. That's another low-level vulnerability in addition to Microsoft's root privilege. It's simply more attack surface. Fully utilizing Kaspersky means sending telemetry to Kaspersky, just as fully utilizing Microsoft's product means sending telemetry to Microsoft. I've no reason to believe Kaspersky is less likely to be compromised than Microsoft.
To put it another way, Kaspersky's business, like many in the Windows ecosystem, is to AdWords or bloatware their way to rent extraction while free alternatives exist. I'm OK with Microsoft making that model obsolescent and Kaspersky adapting or dying, because Kaspersky's argument isn't that it provides significantly better anti-virus protection.
IT IS NOT TRYING TO TRICK ME INTO USING MSE.
Also, back in the day I had an HP laptop with an AMD Duron processor, and it came with Symantec AV. I had overtemp shutdowns. I diagnosed that the AV was using most of the CPU cycles by far. So I researched the providers and somehow Nod32 came out on top across two or three different AV shootouts. I replaced Symantec with Nod32 and the laptop ran so much better. After that I only ran bundled AV on new machines until I could get around to installing Nod32. Nod32 continues to behave appropriately.
On the machine that runs MSE instead of Nod32, there was a different application chewing up the CPU cycles: The HP support assistant.
If MS really wanted to make system security an even playing field where vendors could actually be effective, they'd make it modular (like Linux's LSM) so that admins could easily swap out security solutions without busting the system (slow, bloated, ineffective, etc.).
Vendors are a large part of the problem. They want more money, more often and in many cases really harm performance and do little to protect the system.
My father has three freaking antivirus/antimalware solutions installed. Maybe defender could be better, but if it reduces the market share of the nortons, comodos etc then I'm all for it.
I ran into this when the Windows 10 Anniversary Update rolled out. In my case the program Microsoft uninstalled was a Start Menu replacement, so I didn't actually have a functional start menu for several hours after the upgrade until I got the updated version of the 3rd-party program installed.
This left me shocked, dumbfounded, speechless, and furious. Everything I've observed over the last 20 years says Microsoft honours backwards compatibility above all else. Raymond Chen has great blog posts about the huge efforts they used to go through. My understanding is that's why businesses have stuck with Windows; it'll keep running their 10-, 15-, 20-year-old legacy VB line-of-business apps even on their newest OS. Apparently Microsoft has now decided to throw out backwards compatibility? I don't understand this decision.
Right after installing it I noticed that I had MITMed myself with their "Web Protection" feature. To show green check marks next to my Google search results, this "security" software intercepts my TLS traffic and alters it without my consent. At least Microsoft's solution isn't so desperate to make itself noticed that it compromises my network stack's integrity.
This is my main issue with the "security" industry for Windows. To justify their existence they have to remind their paying users all the time about their involvement and sometimes they use really stupid and dangerous methods to achieve this.
"A component of the operating system has expired."
and I was unable to boot any further. I still can't. I had to turn back my BIOS clock a month in order to "unlock" it.
Needless to say, a planned install of Linux is on the way. I've had enough.
IANAL, but that seems at the least to be a bloody annoying action, and at the most, anti-competitive as well as anti-consumer.
This has been going on since the days of DOS, like 35+ years.
"MS-DOS also grew by incorporating, by direct licensing or feature duplicating, the functionality of tools and utilities developed by independent companies, such as Norton Utilities, PC Tools (Microsoft Anti-Virus), QEMM expanded memory manager, Stacker disk compression, and others."
This is Microsoft business success 101.
Yesterday I fired up Windows 10 in a VM on my MacBook to get some development work done, only to find Windows go straight into installing updates while I'm on battery in a cafe & without my power cord. (But it insists "Don't Turn Off Your PC".) 90 minutes later (!) Windows finally launched... just as I had to run for my train home. I literally couldn't do my work that afternoon, all because of Windows.
I can't understand the need for bloatware aka "anti-virus". If you take the time to educate users and train them to stop clicking on and installing whatever pops up on their screen, then they can pretty much rely on MSE and have a clear mind.
Obviously MSE might not detect EVERYTHING but basic education on how to treat spam/advertisement/phishing goes a long way.
The reality is that an organization as big and as talented as Microsoft could, if they put their mind to it, develop and release a software product in virtually any market covered by their ISVs, and unless it is really terrible, or the third-party tool is really good, displace it.
I have not yet found antivirus software that truly educates the user; there are wonderful opportunities in there for the right kind of company/product. Proactive solutions beat reactive solutions hands down. Like they say, "a stitch in time saves nine".
And answers: "Of course, the cybercriminals!"
What he doesn't say: one of the greatest cybercriminals is the American government.
However, I think the point that having one monopoly AV decreases security because the bad guys can adapt to it is at least not as clear-cut as it seems, especially compared to the scenario of someone having multiple AV programs installed. AV programs themselves are excellent attack vectors, especially for the more skilled attackers, so reducing their number has at least some theoretical benefit.
We often need to deal with user problems because the installation or update process was blocked by AV software without any user visible message. Also often an application is incredibly slow for some period after the installation because AV is doing some additional scanning/blocking (again the user is not informed about this and blames the application).
In startup land this is common - I've seen so many bootstrapped startups fail because they were out-spent or their market was monopolized by big companies or big VC money.
Sometimes it feels like we're going to end up with one giant tech giga-corporation that will just own everything and everyone will be employees of it.
So it's about profit, because the AV companies lose out on what has historically been their most lucrative opportunity to keep users paying.
It was a scary experience and it will take some time until Azure gains my trust. What would help is disentangling Microsoft and Azure into a structure like Google has done with Alphabet. With the current structure, conflicts of interest are inevitable.
User comment: "MacBookPro takes 17 seconds, my Windows machine takes 122 seconds."
Because of AV - fast on Mac, slow on Windows.
MICROSOFT KILLS OFF INDEPENDENT SOFTWARE VENDORS BY FOISTING ITS PRODUCTS THAT ARE IN NO WAY BETTER ON USERS
for the tweet? (The products are in no way better, not the users!)
I stopped using AV software a long time ago for the following reasons:
- It slows down your device (memory, cpu, disk access, etc.).
- It annoys you a lot more than it stops or solves any security concern. I've yet to hear someone tell me their AV software saved them from an actual, real virus... If that ever happens, it's probably such an advanced attack that even the AV software doesn't know about it.
- It's extremely hard to remove, especially when pre-installed as a bloatware on a PC. Sometimes it's also installed as an extension of other software (browser, etc.).
- It usually makes wrong decisions (false positives) that lead to broken web pages, legitimate software that stops working, etc. And unfortunately the "standard" user has no way to figure out it's due to the AV. I can't count the number of times I had to work with my customers to figure out what was making my website or software not run (or even not install) on their machine. One time I had to write to an AV vendor to get my browser extension whitelisted. Never got any answer...
AV software can easily be replaced with common sense and a set of very simple rules.
- Have a hardware/software firewall that blocks everything except what's required (allowing only web traffic initiated from the machine is enough in 99% of cases). Every major OS now comes pre-configured with a software firewall, which removes 90% of the threats.
- Use a strong email service or software (Gmail, etc.). This way you reduce the likelihood that a virus, spam, or phishing email gets through.
- Don't open email attachments from unknown or untrusted senders. Even when the sender seems legitimate, double-check that the email makes sense (no unusual behavior), and pay close attention to URLs, written language, and wording. Don't click links without knowing where they go (domain name, https, etc.). Email remains the simplest way to install a virus or a trojan on someone's computer, so be very, very attentive when acting on an email. If you use an email provider (like Gmail), report spam or phishing attacks quickly so that 1/ it can be stopped quickly for others and 2/ it teaches the machine learning to do better next time.
I've been applying these rules for 15 years and have never gotten a virus, all without using any AV software. My devices run like a charm (PC or Mac).
While I'm a big defender of freedom and open source, I can easily understand and forgive proprietary OS providers' choices with regard to the AV vendors.
Sweeney had an argument, and one that I think Microsoft is trying to address. Anti-virus software (including McAfee and Kaspersky) is responsible for so many daily fuckups in my corporate computing experience that I am aggressively removing it from every computer I can find, and I tell everyone I can to do the same.
It is good that Microsoft is making them justify their existence, pushing them toward less deceptive re-subscription tactics, and in general providing very stiff competition. In this specific case, this is not monopoly tactics; this is pro-consumer competition.
I hope people realize this, because I think most Windows users read this and then immediately squinted and said, "Kaspersky, huh?" It took me over an hour to scrape that gunk out of the last Windows 7 box I set up for my family, and I was happy Win10 kicked it to the curb for me on the upgrade I just helped with.
Having anti-viruses installed is for fools. I just upload every single executable to VirusTotal.com and make sure I know the source I downloaded it from - this is far superior to any anti-virus and doesn't slow down your PC.
I said this when Windows 10 was new and I got tons of downvotes. I say it again because it still holds true and needs to be said.
Fun fact: if you had bought one share of MSFT on 23 Dec 1999, you would be down 2 cents today.
Oh, wait, that's because iOS is orders of magnitude more secure than Windows and doesn't really need an AV product. Whereas Windows has been plagued by malware for decades. Nobody wants to buy AV in the same way that nobody wants to buy health insurance; it's an unfortunate necessity in an imperfect world.
Unfortunately the tradeoff we're facing here is the "information feudalism" one. People aren't realistically able to secure themselves, so they end up having to pick a quasi-monopolist and delegate to them the ability to ban software. Such bans can be extremely arbitrary. Occasionally even your headphone jack gets taken away. But people put up with it because it works for them in a way that anarchy doesn't.
Microsoft would clearly love to make Windows behave like iOS: apps only installable from the store which has power of veto and takes a cut. Heck, Apple would probably like to do that with OSX. Neither has quite managed it yet.
I suspect the long term way out of this is a proper user-owned subscription-driven open hardware company, but that's a very hard thing to build and a hard sell to the average user.
Walmart/Amazon/etc. vacuum money and resources from the local (or "local-ish") community into their respective profit centers. It used to be that small towns were relatively self-sustaining and much of the money stayed within local boundaries; while the cost of goods was higher, at least buying them supported local businesses.
When a factory leaves a town or a company goes under from overseas competition it creates a big gaping hole in the local economy that frequently doesn't end up being filled. There are no companies coming back, and no capital available in the area anymore since all profits are sucked up.
Maybe all this is inevitable, but it doesn't quite seem so to me.
"Productivity" at the macroeconomic reporting level is total output / total workers. So productivity increases in manufacturing are divided by the total workforce size, which dilutes them.
Job growth is in the areas where productivity is low, such as health care and education.
Neoliberalism is a political project that has been dominant in the West since the mid '80s. Other elements of neoliberalism include:
- The decline of trades unions
- Financialization
- Power-biased technical change
- Lower minimum wages
- A meaner welfare state
It would now take a very large effort, one that would look like protectionism, to bring all that back. Can the Trump administration pull that off?
That's disingenuous; you spend $5 less in total in your country.
At Convox we have been running Docker in prod for 18 months successfully.
1. Don't DIY. Building a custom deployment system with any tech (Docker, Kubernetes, Ansible, Packer, etc) is a challenge. All the small problems add up to one big burden on you. 6 months later you look back at a lot of wasted time...
2. Don't use all of Docker. Images, containers and the logging drivers are all simply great. Volumes, networks and orchestration are complex.
3. Use services. Using VPC is far simpler than Docker networking. Using ECS is much easier than maintaining your own etcd or Swarm cluster. Using CloudWatch Logs is cheaper and more reliable than deploying a logging contraption into your cluster. Using a DB service like RDS is far, far easier than building your own reliable data layer.
Again thanks for sharing your experience as a cautionary tale.
If you are starting a new business you should not take on building a deployment system as part of the challenge.
Use a well-built and peer reviewed platform like Heroku, Elastic Beanstalk or Convox.
Then he proceeds to name every problem Kubernetes fixes.
I've talked about the relative immaturity of Docker as a system in real-world use (outside of dev) and am often struck by how rarely people understand that it's still a work in progress, albeit one that can massively transform your business. The hype works.
That said, Docker can work fantastically in production, but you need to understand its limits and start small.
 https://www.amazon.com/Docker-Practice-Ian-Miell/dp/16172927... - working on 2nd edition, if anyone has any suggestions @ianmiell
 Blog: https://medium.com/@zwischenzugs
As someone starting to consider Docker (and possibly Swarm), these seem to be pretty serious criticisms. Any experiences to corroborate or counter these two posts? Going by what's written here it would be suicide to use Docker, but many people are...
Could you elaborate on this? Did you settle on an orchestrator, and if so, which one?
What's the point then?
Don't get me wrong, docker is one of the most frustrating technologies I've used (partly because it shows such promise), but a lot of the problems he describes can be sorted with the most cursory Google.
>> Try using "ENV PYTHONUNBUFFERED 1" in your dockerfile
A lot of the issues I see him describe in production are fixed by Kubernetes. Compose works fine for local dev orchestration, but the pattern doesn't work for deployment. The ideal world would be one where I can run my compose file on my cloud provider, but Swarm isn't there yet. I have to rewrite my compose file using Kubernetes configs; it's not a 1:1 mapping, but the high-level connections are there if you think of Kubernetes Pods as Docker containers. He mentions that orchestration across a cluster with Swarm is nasty, but it's elegant with Kubernetes.
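Roughly, a single compose service ends up looking something like this on the Kubernetes side (a sketch with made-up names, not anyone's actual config):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                       # placeholder service name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:latest   # placeholder image
            ports:
            - containerPort: 8000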
Obviously, there is no constraint preventing him from using a 3rd-party service. Why not let Google Container Engine (GKE) or AWS ECR handle it for you?
Longer build times:
I think this is really where he is missing the mark. It sounds like he has a fundamental misunderstanding that if you mount the source code in dev you have to do it in prod too, and that you have to have one container. Not true: you can mount the source code in dev using compose, so you don't have to rebuild every time you change a line. Also, I think it's a pattern in Docker to try to keep your containers as atomic units of your app architecture. It sounds like they are trying to bake all components of their app into one container (app + db + service, etc.). Just break them up into containers and link them up with compose. This architecture then translates cleanly to one of the cloud providers for production: GKE or AWS.
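A sketch of what that looks like in a dev compose file; service names, paths and the Postgres version are made up for illustration:

    version: "2"
    services:
      app:
        build: .
        volumes:
          - .:/app              # bind-mount source in dev, so edits don't require a rebuild
        ports:
          - "8000:8000"
        depends_on:
          - db
      db:
        image: postgres:9.6
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

In production you'd drop the source bind-mount and rely on the image built from the Dockerfile instead.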
DB and Persistence:
Yes, I think it is very clear that containers are stateless. So, yes, if you want to run a DB in a container you'd have to mount an external drive somewhere. The merits and risks of that are another discussion, but as he states, it's generally frowned upon to containerize a DB. (Not completely sure why; some argument about the stability of the container and corruption of data.) I talk more about this here: https://news.ycombinator.com/item?id=12913198
I think 12-factor-style apps, containerized, fit more smoothly into the docker-compose-style architecture. Accordingly, if all your containers are logging to stdout, it's all conveniently merged and printed to the terminal when you run compose. Then in production, GKE handles it nicely too with its logging system.
In conclusion, I think most of his problems would have been avoided if he hadn't skipped researching Kubernetes and hadn't made the mounting oversight. The other big oversight, or at least one he didn't mention, is that he has no deployment tool. I wouldn't be able to deploy effectively without a build tool like Jenkins. I talk about a lot of these issues and how to fix them here: https://news.ycombinator.com/item?id=12860519
The conclusion from these is not that Docker sucks, but that YOU HAVE TO LEARN it. I agree that it's a very steep learning curve, but after the pieces come together, Docker solves quite a lot of problems and is actually very useful.