If you don't know, John Mackey, the CEO / founder, is a major believer in conscious capitalism and of empowering his employees.
Whole Foods employees get paid pretty darn well with some crazy good benefits for their industry and line of work (UNION FREE most of the time too!).
WF banks on them being true believers in and motivators of the cause - including dedicating a fair amount of paid time to trainings. I've heard mixed stories about how Amazon treats employees. I wonder how that will mesh.
So I guess I'm asking:
* What is going to happen with employee culture?
* What is going to happen with all the "Fair Trade" deals WF has in place that might not be the most economical decision now?
* Here comes store automation and hefty lay-offs?
Source: Worked at WF for 3 years
"Amazon did not just buy Whole Foods grocery stores. It bought 431 upper-income, prime-location distribution nodes for everything it does."
Groceries are one of the few large markets that require some proximity to customers due to costs and spoilage. Each grocery store is a type of mini-distribution center for grocery products.
Shipt and Instacart have succeeded to date because they use existing distribution channels and set up marketplaces for the "last-mile" of delivery. This is in contrast to Webvan in the early 2000's who tried to do grocery delivery by building their own distribution and failed spectacularly.
Amazon has become an expert in distribution and logistics. But it is clear that using their current model doesn't generally work with groceries (RIP Webvan, 1998-2001). Bananas need to be treated much differently than books.
So what does Amazon do? Buy Whole Foods!! A moderately sized grocery chain with a significant national footprint and lots of higher-income customers.
Now they instantly have a pre-built distribution channel that is already optimized for the grocery business (which again is much different than non-perishable consumer goods etc).
Things definitely just got interesting in this space!! I still believe that Instacart and Shipt can succeed, but they need to maintain a laser focus on making their shoppers and customers happy! And grow as fast as possible while Amazon digests Whole Foods!
[Note: I was the early CTO for Shipt responsible for building their grocery delivery platform and initial engineering team. Go Shipt!]
If this is about Amazon thinking they can turn things around for Whole Foods, then it will certainly mean drastic changes to price, selection, and employee structure.
If this is about Amazon using a brick-and-mortar chain as a tool to help Amazon's own ventures (e.g. grocery delivery, local storage for same-day deliveries, product return and support locations, etc), then it will certainly mean drastic changes to what a Whole Foods store even is.
Either way, I can't imagine a course in which Whole Foods as we know it isn't basically over. Which doesn't necessarily bother me (I migrated to Trader Joe's and similar competitors long ago), but does seem like a big deal in the grand scheme of things. The grocery industry was heading in a Walmart-ish direction... and Whole Foods was almost single-handedly responsible for bringing a counterculture into the mainstream, and forcing all the other chains to reverse course and up their game.
Frankly I don't buy as much from WF anymore considering that what they carry is much more like "organic junk-food" than actual food.
If you really want to support the "cause", find yourself a local farmer or CSA to buy from and support them directly.
Alexa: "Buying Whole Foods"
The whole premise [of online grocery] is that you're saving people a trip to the store, but people actually like going to the store to buy groceries.
A bunch of smart people at Amazon have been thinking about re-imagining the next phase of physical retail. They want more share of the wallet, and habitual, frequent use of Amazon for groceries is the ultimate goal.
"Long term, a stronger grocery business could position Amazon to become a wholesale food-distribution business serving supermarkets, convenience stores, restaurants, hotels, hospitals and schools. "
"A group of Amazon executives met late last year to discuss the disadvantage Amazon faced compared with grocery competitors such as Wal-Mart and Kroger because of its lack of physical stores and customer apprehension about buying fresh foods online. They decided they needed something more to jump-start Amazons grocery push beyond plans already under way for the Amazon Go convenience store, modeled for urban areas, and drive-in grocery pick-up stations suited for the suburbs."
Not saying it won't be a game-changer, but it is fascinating to watch people extrapolate this out to the extremes as soon as the news hits the ticker.
To put it another way- if it were that obvious that buying Whole Foods was going to make Amazon the dominant grocery seller, why weren't people predicting that they would do it all along and asking what they were waiting for? It isn't until Amazon acts that people say "Oh yeah, that was the right move."
I watched this video a while ago about what Amazon Go is a precursor for. In summary, if AWS is renting out server infrastructure for people, then you can imagine that Amazon can make the infrastructure to lease out Amazon Go to other stores. They first integrate it with Whole Foods, then as customers expect no more checkout areas, they will only go to Whole Foods because it's faster and maybe cheaper. Then because customers want it everywhere, Amazon could force other stores like Walmart and Target to integrate Amazon Go infrastructure to their stores, without Amazon directly competing with these stores, and make a ton of money leasing it out without spending money on building whole new stores. Of course, the acquisition could also be like nodes for warehousing and delivery, but both avenues are not mutually exclusive.
Amazon is known to work backward from an imagined future press release, and then do the actions necessary to make that press release. How does Amazon see the future in this case?
-Amazon already has PrimeNow and Amazon Fresh, which offer a great grocery delivery service. For those who have tried these services, it's easy to see how addictive they are vs. going to a physical store.
-I can't see Amazon using existing retail stores as distribution centers. I would think you really only need one grocery distribution center for each city in America, and PrimeNow (and AmazonFresh) already have that! Or, has Amazon determined that picking/packing from a retail store is actually efficient? Retail as a DC seems tough to automate; items are in the wrong spot, suspiciously missing, etc. I don't get it.
-I would have guessed that online grocery from highly automated distribution centers is where the majority of the market would be within ~20 years. Does Amazon, the king of online, not think that!?
-Or does Amazon just believe that they can run Whole Foods better than it is currently run?
-Do they just want the purchasing, existing relationships, etc to also sell their customers through other channels?
As a German I'm amazed by the "food haul" videos on YouTube and the positive feedback for ALDI US (belongs to ALDI Süd), Trader Joe's (ALDI Nord) and Lidl (part of the Schwarz Gruppe; they just started in the US yesterday).
Amazon saw it and responded quickly.
Amazon is a cut throat profit making machine at the expense of human exploitation.
Whole Foods felt like the little guy who was trying to do things differently from mainstream supermarkets.
Whole Foods has millions of customers. Amazon will surely be advertising AmazonFresh or some re-branded form of it - such as "Whole Foods Direct" - to Whole Foods customers. Do Whole Foods stores turn into AmazonFresh warehouses? Possibly, but it's unclear how the two businesses will eventually integrate. It's also possible that Instacart gets acquired by Amazon, but for the most part I see them getting screwed in this deal. Instacart's business development deal is like bringing a knife to a bomb-fight.
The other major value this deal brings to Amazon is the industry-specific knowledge that the Whole Foods team brings. As a frequent East Coast Whole Foods shopper, I am always amazed to see how much of their food comes from the West Coast and all over the world. I think the execs of AmazonFresh, who are mainly former HomeGrocer/Webvan execs, appreciate the complicated logistics of this business. Amazon will be able to combine its software engineering and logistics knowledge with Whole Foods' expertise at creating an amazing grocery-shopping experience.
Now imagine if you are Amazon and you'd like all those people going to Walmart to just use your services. If I could get my groceries delivered to my door and only need to go to stores once in a blue moon I'd be really happy.
I'm surprised the article didn't mention this.
That's $1 for every year since the Big Bang.
Since I did not see it asked: any idea if we will see some new Prime benefit at Whole Foods?
I'm not shedding any tears for Whole Foods' "culture" now that they've been bought by Amazon. They were always a sort of sham-progressive company. In the words of Portlandia, "Whole Foods is CORPORATE."
I suppose if you're going to be corporate, might as well go full Amazon. They do it very well.
> an American supermarket chain exclusively featuring foods without artificial preservatives, colors, flavors, sweeteners, and hydrogenated fats.
I was waiting for this to appear (a common grocery store with seen-as-healthy stuff) but it already exists. TIL.
People seem to think it will be bought, but this would seem to be negative news, because the price paid for Whole Foods was about half the multiple Sprouts is trading at.
I feel this is more Mackey cashing out than anything else.
Whole Foods is about to change a lot and quickly.
What would be also helpful would be for Amazon to disrupt the wine and liquor sector.
How's this for an investment strategy... Long AMZN, short a basket of all other mid to large retailers & grocers.
In my area where there is a big college campus, they have been hiring for AmazonFresh devs for over a year. We have no AmazonFresh here but we do have a Whole Foods.
Mr. Bezos: "Alexa, buy me something from Whole Foods."
Alexa: "Good. Buying Whole Foods."
1) Import cheap Chinese goods,
2) avoid sales tax,
3) destroy the environment (china),
4) destroy brick and mortar stores,
5) destroy small/medium business.
... I wonder how this will play out. Is Instacart's business threatened by losing Whole Foods as a client?
What is the relationship between a centrally planned economy as for example Gosplan was trying to run in the USSR, and the presence of these huge market-like but centralized entities like Amazon and Wal-Mart inside a free-market economy? This becomes particularly interesting because Amazon apparently does not have any particular interest in turning a profit.
A central problem faced by Gosplan was the collection of high quality data about supply chains and the estimation of the utility function of consumers. I would say Amazon is in a pretty good position to do both right now.
Another question, somewhat related: at what point does it become profitable for Amazon to lobby for more redistributive taxation? This might sound paradoxical, because you would assume that Amazon represents the interests of its owners, who would probably suffer under such a taxation scheme. But shouldn't there be a point at which giving more disposable income to poor people boosts the overall income of an entity like Amazon (since Amazon sells many flat-screen TVs but not as many mansions or yachts)?
What other ideology than fascism best describes government coercion of its citizens to engage in business relationships with the insurance companies to ensure their profits remain at certain levels?
Sometimes I forget just how big Amazon is.
... But it's nice to know I can get my artisanal, single-source, Fair Trade, organic, small-farm, no-GMO cucumber water with one day shipping now.
I'm building an app to help you easily email your congressperson and ask them to create legislation requiring space/tab equality. This has to stop. Please consider donating to my Patreon.
Brace yourself: I use two spaces after the end of sentences too.  I am quite the rebel.
Modern editors (e.g. Sublime Text) let you easily convert spaces to tabs (or vice versa) and change the indentation length of existing code.
* Joe likes 4-space tabs, I like 2-space tabs, and Jane is old-school with 8-space tabs.
* All goes well until someone aligns something visually, like so:
* Now it aligns perfectly on my machine, looks mostly ok on Joe's machine, and is ON MARS on Jane's machine.
Thus one-or-more of three futures happens:
* Someone implements a code re-formatter into version control
* Someone re-aligns the code, starting the process over again.
* Someone calls a meeting and demands we all switch to spaces
I generally prefer tabs because I feel that they're more egalitarian: I like 4-space indentation, but don't want to force that on everyone encountering my code. Similarly, I find 2-space indentation very hard to parse in most languages, so I don't want that affecting me if I can get away with it. While this is possible with spaces and maybe a series of Git hooks, it's trivial with tabs.
On the other hand, I always use spaces in languages like Python or Ruby where there are well-codified style standards. I also always show invisible characters on any editor which allows it, and have cleanup scripts to ensure that whitespace is standardized across any non-vendor code in the project.
Maybe most tab users don't feel this way? Maybe most aren't as careful/picky as I am? Maybe tabs are more popular with younger devs? But I feel like tabs can offer more interoperability than spaces when many coders are working on the same project when the language/community doesn't strongly specify whitespace.
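If a team did want to automate that kind of whitespace normalization instead of relying on Git hooks and discipline, the core transform is tiny. A minimal sketch (the function name and the 4-space width are my own assumptions, not anything from this thread):

```python
import re

def spaces_to_tabs(line, width=4):
    """Convert leading space indentation to tabs, `width` spaces per tab.

    Any remainder smaller than `width` is kept as spaces, so alignment
    that isn't a multiple of the indent width survives.
    """
    match = re.match(r"^( +)", line)
    if not match:
        return line  # no leading spaces (already tabs, or not indented)
    spaces = len(match.group(1))
    tabs, rest = divmod(spaces, width)
    return "\t" * tabs + " " * rest + line[spaces:]
```

A pre-commit hook could run this over every staged line; the inverse (tabs to spaces) is the same idea with the replacement flipped.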
I would expect there simply is a confounding factor that the author did not look at. Maybe the info is not in the data.
I can imagine that the space/tab choice is related to the "upbringing" of the developer. Maybe which language or editor they used first in their life.
Or maybe it's related to culture. For example when using IRC, tabs are usually not used to communicate. Maybe that impacts the general choice of tabs/spaces.
Or maybe more sophisticated users tend to exchange the tab key for something else:
This means you consider consequences beyond "but it works on my machine" so you're a better programmer. Ergo, higher salary.
Sure, but how many spaces? grabs popcorn
From the Wikipedia article (https://en.wikipedia.org/wiki/Tab_key):
> A common horizontal tab size of eight characters evolved, despite five characters being half an inch and the typical paragraph indentation of the time, because as a power of two it was easier to calculate in binary for the limited digital electronics available.
Why someone decided to round up to 8 instead of down to a much more sensible 4 spaces is beyond me.
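The power-of-two convenience is concrete, for what it's worth: advancing to the next tab stop needs no division when the stop is a power of two, just a bitwise OR. A toy sketch (the function name is mine):

```python
def next_tab_stop(col, tab=8):
    """Return the column of the next tab stop after `col`.

    Works by setting the low bits and adding one; `tab` must be a
    power of two for this bit trick to be valid.
    """
    return (col | (tab - 1)) + 1
```

With 1960s-era digital logic, that OR-and-increment was far cheaper than computing `col - (col % 5)` for a 5-character stop.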
We finally compromised and we're using 3 tabs.
Only, in Go, you don't have a choice -- gofmt enforces tabs only (with spaces for alignment). So something seems odd there.
The answer, as always, is: lisp with parinfer - makes the whole debate irrelevant.
Unlike "traditional" formatters, it parses your code into a syntax tree completely disregarding any original formatting, meaning the output is entirely consistent. It's pretty liberating to devote zero time to manually formatting and can make code reviews more constructive and less superficial. It is what is is, and it's pretty opinionated based on Facebook's code style. Works great for us, enforced with a git hook.
If a tree falls in the forest, etc
Case closed! :)
Because bigger corps generally set company-wide standards on code indentation, and more often than not those prescribe spaces.
People who use spaces are just better than those who use tabs.
If you care about clean code, being orderly and organized probably extends to other areas, too, and that helps you make more money.
In my experience, developers who mostly use default settings are often unorganized and easily confused. They also know very little about the systems they are working with, because everything outside their IDE doesn't interest them.
I'd also bet that many tab users had to check what they use, because they didn't know or care.
tl;dr: Developers who change the settings are more dedicated to their job.
The Go community was on to something with gofmt (even if they did decide on tabs).
That said, in the imperfect world we live in, I always use spaces. "Indent with tabs, align with spaces" is obviously not rocket science, but it's just too opaque unless you have a strong code review process.
When I was working on my own small projects I was using tabs, or tabs combined with spaces, which didn't earn me a lot of money.
Once I started working for a big corporation, the coding standards mandated by the company meant I could only use spaces, because people couldn't be trusted to use a nice space/tab combination.
These people have a particular gusto in constantly one-upping each other with the latest good practice; the one that adopts the highest number of good practices wins. Their constant talk of the latest fad and push for the "right ways" of doing things usually puts them in positions where they end up evaluating and hiring new developers (I got interviewed just the other day by somebody that didn't ask me to design or structure any code, but rather if I use == or ===).
Some of these are actually excellent developers nonetheless; others will drive entire teams into rewriting a perfectly working application into a completely useless mess of a thousand microservices, an endeavour that will end up on their CV anyway, helping them find another excellently paid job once it's time to migrate.
Set the tab stop to whatever your project/team style guide requires. Tabs to spaces is just plain stupid. Why on earth would you ignore the 0x09 character, a single character that exists for this exact reason, and replace it with multiple 0x20 characters in the file? Just set your tab stop to what you like to look at, be it 2, 4, or 42 characters. By the way, the default is 4 characters for most editors/IDEs.
vim: :set tabstop=4
vscode: add "editor.tabSize": 4 to your settings
Everyone else's comments are moot!
In a perfect world you'd use tabs for semantic indentation and spaces for stylistic indentation but this is too hard to implement in 100+ person teams and also can't be automated via an IDE style sheet.
I'm not a PHP guy (so I'm not sure about this) but it looks like PHP-FIG suggests it too... http://www.php-fig.org/psr/psr-2/
So are we really saying software developers who follow style guides earn more? That doesn't surprise me. Adhering to guidelines is a good way to work well on teams and thus become a more valuable team member.
If the 95% confidence intervals for tabs and the 95% confidence intervals for spaces overlap, there is a possibility of failing to reject the null hypothesis that the difference between the two is zero at the alpha = 0.05 level. Since there is little overlap in most cases, the original conclusion holds.
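For anyone who wants to sanity-check the overlap argument themselves, here is a rough sketch using the normal approximation. The function names are mine, and the example numbers at the bottom are made up for illustration, not taken from the survey:

```python
import math

def ci95(mean, sd, n):
    # Approximate 95% confidence interval for a sample mean,
    # using the normal approximation (z = 1.96).
    half = 1.96 * sd / math.sqrt(n)
    return (mean - half, mean + half)

def overlaps(a, b):
    # True if intervals a = (lo, hi) and b = (lo, hi) intersect.
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical salary figures (in $k), purely illustrative:
tabs_ci = ci95(43.8, 25.0, 12000)
spaces_ci = ci95(59.1, 25.0, 12000)
```

(Strictly speaking, non-overlapping CIs imply significance but the converse doesn't quite hold; the proper test is on the CI of the difference itself.)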
So the model actually predicts causality, instead of correlation? That's amazing. I'll start using spaces instead of tabs today and I will ask for an 8.6% raise.
According to this model, I should get it!
Who do I contact for my check?
With this, I instantly conform to how the file is formatted. Is it 3 space indentation, made with a mixture of 8-tabs and spaces? Autotab will figure it out, spit out the Vim params, and you're modifying away without causing spurious diffs in version control.
You have to learn to use Ctrl-T for indent and Ctrl-D for deindent in Vim; those obey the shiftwidth and generate indentation according to the shiftwidth, tabsize and expandtab setting.
I don't really care that much, Go says tabs so whatever. But spaces have the benefit (or drawback to some) of rendering exactly the same way for everyone (assuming fixed width fonts, does anyone code with proportional fonts?). Also I've always got two thumbs on the space bar. And it's much bigger than the tab key.
So settled then?
Or do some people actually use the space bar to indent code? (which is obviously insane)
I never thought that such styling would really matter much ... I wonder how much a developer using a beautifier earns on average.
Anyhow, statistics sometimes brings up some weird conclusions.
Am I missing something here? This sounds really dumb, as tabs make the most sense, and they appear to use less memory as well.
I use vim with vim-sleuth now. If anybody knows how I can achieve what I described above in vim, please tell me how.
Then somebody asked me why. The answer was that Sun Solaris was a crappy operating system which would fail to boot if you used tabs in files like /etc/vfstab.
For some odd reason, I carried around a weird bias about tabs rather than regarding Sun as being shitty.
I worked somewhere where a bunch of folks preferred four spaces (because they came from Python), others two (because they came from JS). Use tabs, set your preferred tab size, boom, everyone gets along.
That someone with this many different types of experience can have accidentally avoided encountering a whole class of people and their code really puts into perspective how small the experience of any one person is.
Do Visual Studio and SSMS support the space equivalent of "Select X rows and tab them all at once"? I just tried now and all the code is wiped out, replaced by a single space.
I think it's the best of both worlds, really.
It lets you check in an .editorconfig file that specifies whether your project uses spaces or tabs. And a bunch of editors and IDEs already have built-in support for it!
Doesn't solve the holy wars, but it can sure help reduce the friction.
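For reference, a minimal .editorconfig might look like this. The sections and values below are illustrative, not a recommendation:

```ini
# .editorconfig - top of repository
root = true

[*]
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.py]
indent_style = space
indent_size = 4

[Makefile]
indent_style = tab
```

Editors with EditorConfig support pick this up automatically, so newcomers inherit the project's convention without reading a style doc first.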
... "format": "prettier-eslint --write --trailing-comma es5 --single-quote true \"_src/**/*.js\"", "lint": "eslint \"_src/**/*.js\"", "precommit": "npm run format && npm run lint" ...
"There were 28,657 survey respondents who provided an answer to tabs versus spaces and who considered themselves a professional developer (as opposed to a student or former programmer). Within this group, 40.7% use tabs and 41.8% use spaces"
Without filtering to the 'professional developers', meaning overall, there are more tab users (32% vs. 28%).
(Am I wrong? I would hope a Data Scientist would know a basic thing like this, but I don't know R so I can't tell for sure from their code.)
...but at least with tabs everyone can adjust the gap size to their liking. :>
Boring corporations like boring spaces, and have to pay big, boring salaries to get any talent at all.
On the other hand, cool code slingers may or may not prefer tabs out of personal idiosyncrasies, but as long as all of them get shortchanged by the VCs and/or startup founders...
Use whatever's right for you! And, if you come to a workplace where there are "rules" about that, try to obey them.
Never take part in any of those wars of Tabs vs Spaces, VIM vs Emacs vs Sublime vs whatever.
Spend time on writing more tests instead!
It's because they get more keystrokes in!
Depending on your tab/indent settings you might get as much as 4x or even 8x the XP by using spaces.
Is it possible that tab-aversive people making hiring decisions act on their aversions (consciously or unconsciously), while tab-friendly hiring managers do not?
Now that's a fact for your next job interview!
I mean, how are we to achieve world peace when we still have people using, and being rewarded for wasting, precious bytes?
Tabs are more often used in languages like C and C++ which are more traditional and pay less despite being more technical.
Indeed, it does mean that.
The internets giveth, and the internets taketh away.
Many of those who answered tabs are actually using an IDE which inserts spaces when they push tab. They believe they are using tabs because they've never realized that this is going on. People under that misapprehension are likely to be less skilled.
Additionally, if a coder is, in fact, deliberately choosing to use tabs, they are going against the majority opinion of coders and almost all style guides. That attitude might be correlated with lesser income.
Spaces are for people still using 80 column monitors.
I don't think the fact that you use spaces automatically makes you a richer programmer.
You tabbers are costing me money!
Use and respect .editorconfig files in your projects.
Let's move on to ASCII vs Unicode ....
Are you guys all programming in vi or notepad or something?
Eventually leadership got annoyed at the amount of time developers were wasting punting code reviews back and forth over this silly nonsense, let alone the loud altercations around the office. Who ever could have guessed that developers would be such an opinionated bunch?
So they mandated spaces, and all was peaceful in the office.
For about a day.
Naively, they put something along the lines of "spaces are to be used for indentation" in the code style document, but failed to specify how many spaces.
So the new arguments started up amongst the office. 3 spaces or 4? Whoever could have guessed that a number of developers were actually belligerent types who would go out of their way to find something to argue about, and also stubborn? Such a rare trait in developers.
So the arguments raged again, and eventually management decided they'd had enough. After all the fuss and grumbling over making an arbitrary decision on the tabs vs spaces debate, they decided this time to be democratic.
They scheduled a big all-hands meeting for the developers, and tolerating no interruptions, outlined that a binding vote was going to be taken. The code style document would be updated to reflect the democratic consensus, and also warning that future arguments on any other points would result in verbal warnings, and potentially dismissal.
With the software development managers standing at the front each to independently do the count, they asked all developers in favour of 3 spaces to raise their right hand, and all developers in favour of 4 spaces to raise their left.
The count started, but soon the managers realised that with all the raised hands, they couldn't see the fours for the threes.
Correlation does not imply causation
Just a guess.
Meanwhile, a group of Finnish researchers is organizing a review boycott against Elsevier, one of the reasons being Elsevier's unyielding opposition to the Finnish libraries' OA requests.
In the past the cost of papers was paid on the demand side and borne communally, now the cost is paid on the supply side. Science still values paper counts and citation counts - but it seems to me that folks who can afford publication now have an unhealthy advantage that they didn't used to!
Maybe if America had open access, things would have turned out a lot better for Aaron Swartz :(
Just kidding of course, this is great news. The EU should still be the main beneficiary of open access science following this policy.
Note, that here the "product" I'm referring to is the final formatted article. If governments want to mandate that universities release internal versions of their published works that seems fine, but that work should be for the universities or governments to undertake. They should not be allowed to release Nature's formatted/published version. This is how Pubmed Central works currently in the US (unformatted manuscripts are released, not the journals' version). When Nature releases an article, they put a lot of work into formatting it for publication so it looks nice. That final product does and should belong to them.
It's fine if people think that publicly-funded research should be freely available. But the fact remains that scientists have been voluntarily publishing their work in private for-profit journals for 100+ years. You can't just "undo" that. And they're still doing it today. If scientists truly felt strongly about these issues they'd only publish in OA journals, but most of them don't care (source: I'm a scientist).
This seems to be a shot at WhatsApp and Signal, implying that they have loopholes that allow the FBI to snoop in. I'm not sure how true that is. This might be an attempt to deflect from the fact that Telegram uses a home-baked encryption protocol which might be insecure, while WhatsApp uses the OWS protocol.
In 2003-2006, we built a service that was a financial system to exchange financial data through various means, including AS2 EDI over HTTP, with big companies and government suppliers such as AAFES (Army and Air Force Exchange). Initially we had RSA, PGP and a custom encryption in there, the latter two for other features besides EDI. We got a letter from the FBI asking us to switch only to RSA; they wanted to know about our use of PGP and wanted to see our custom encryption if we continued to use it. Being a small/medium company we switched to just RSA to avoid any issues. It was an odd day; when I came into the office they told me I had an FBI letter on my desk, and you can imagine what happens around an office when something like that happens. Very strange day indeed.
Moral of the story, if you create your own crypto or aren't using the ones you are supposed to use, in any capacity, expect some knocking.
One also has to wonder if the FBI consider the Telegram team to be essentially undeclared Russian agents, and hence fair game.
A journalist like Poitras is on all sorts of lists and incessantly harassed. There are secret courts, secret laws and secret processes at play. And beyond this the power of harassment, intimidation, blackmail and bribery. Individuals and even organizations cannot prevail against the array of capabilities.
It's nice to think of democratic theory and rights, but these only exist as talking points, while unexercised. The moment you start exercising them you end up on all sorts of lists, marked for harassment, and basically have a target on your back. Dissent is squashed even before it can formulate.
But my first reaction was "Cool, our government really cares, is creative and has the necessary power to get things done."
For those of you who've worked with government, you've seen how insanely difficult the procurement process is. Being as specific as needing to get competitive bids for toilet paper purchases, etc. So the fact that they could get potentially large amounts of bribe money means (a)This goes to high levels in the organization (b)They've probably done this before.
I wonder how much they offered?
And I wonder how many other pieces of software have backdoors. I would think the first things they would try and get access to is (a)Certificate issuers and (b) VPN software.
Do we know that GoDaddy, Let's Encrypt, OpenVPN, Cisco VPN, Juniper, etc. don't have backdoors?
Sure, you can lock up all communication for privacy reasons, and the government can spend all kinds of resources trying to prevent or circumvent encryption - however it's a waste of resources, as it's simply a bandaid.
If I wanted to do something violent or evil I/you can simply have regular meetings and use paper communication - the old spy-style stuff. Of course those networks can be infiltrated by governments with the resources, and they can maintain that presence by allowing certain acts within networks to occur vs. deciding which ones they should stop; it's how the war against Hitler was won once their encryption was broken - watch the very well-done The Imitation Game - http://www.imdb.com/title/tt2084970/ - for a reference.
The only real solution is dealing with the root causes. I heard an analyst on TV (a rare occasion for me) mention after Trump's Saudi visit and speech, that he didn't mention that the Saudis should look into the root causes of why there is terrorist activity growing in their countries; of course a lot of it is historical karma and rage from violent acts against their families, however a lot is because people's basic needs aren't being met which prevents the higher levels of Maslow's Hierarchy of Needs from being reached and maintained.
There's a solution, and it requires building real community, locally, where you are now - and striving for people to become healthy so they don't develop bias and other coping mechanisms that prevent empathy, understanding, and therefore compassion. Preventing responsible ownership of weapons isn't useful either; not developing and supplying weapons en masse would be beneficial, though most attacks recently have been with vehicles or knives.
Universal Basic Income would also bring us closer to a truly free labor market, and it can evolve from there: giving people the time to do what they feel is most important in the moment, without being forced to work in a shitty environment with shitty managers or co-workers. The health improvement and increased productivity here alone are worth it.
and, "before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar"
And if such PR herding worked, wouldn't the surveillants be prepared to pay for such efforts to make their job easier?
So, what seems readily apparent is: Telegram takes state money to offer an insecure option, while dissimulating to the world that it's a) secure and b) turning down state money all the time.
I know why this perspective isn't discussed in MSM. But I don't get why it's not discussed more here. It seems obvious to me. And personally, I think that's a good thing. Catch more criminals / terrorists.
I mean, simply use a public/private encryption algorithm that has proven to be highly secure:
- Share your public key openly
- Anyone can send a message to you using your public key to encrypt the message
- You decrypt with your private key on device
Do all the encryption/decryption on device and voila, secure messaging. (This is basically how https works.)
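The flow above can be sketched with textbook RSA. This is a toy with tiny primes purely to illustrate the mechanics - any real messaging app would use a vetted library with large keys and proper padding, not this:

```python
# Toy RSA sketch of the public/private-key flow described above.
# NOT secure - for illustration of the mechanics only.

def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Tiny primes for illustration only
p, q = 61, 53
n = p * q               # public modulus
phi = (p - 1) * (q - 1)
e = 17                  # public exponent -> share (n, e) openly
d = modinv(e, phi)      # private exponent -> keep on device

def encrypt(m, pub):    # anyone with the public key can encrypt
    n, e = pub
    return pow(m, e, n)

def decrypt(c, priv):   # only the private-key holder can decrypt
    n, d = priv
    return pow(c, d, n)

msg = 42
c = encrypt(msg, (n, e))
assert decrypt(c, (n, d)) == msg
```

Encryption and decryption both happen on device; only the public pair (n, e) ever needs to travel.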
Of course this only allows a single device the ability to decrypt the message.
However, if you want to allow multiple devices to share a private key, they can simply send each other their own private keys using the same encrypted protocol.
In addition, for super paranoid use, a master password could be used to salt the private key so that would be required with the private key to enable decryption. (Which is similar to how password keepers basically work.)
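That "master password" idea is roughly a key-derivation function: stretch the password into a secret that's required alongside the private key. A minimal sketch with Python's stdlib PBKDF2 (the function name here is illustrative, not any particular password manager's API):

```python
import hashlib

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # PBKDF2 stretches the master password so brute force is expensive;
    # the result can be required in addition to the private key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Fixed salt so this example is deterministic; in practice use
# os.urandom(16) and store the salt alongside the encrypted blob.
salt = b"\x00" * 16
k1 = derive_wrapping_key("correct horse battery staple", salt)
k2 = derive_wrapping_key("correct horse battery staple", salt)
assert k1 == k2 and len(k1) == 32  # same password -> same 32-byte key
```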
What am I missing?
Option 2: Could be true because seriously, who trusts the FBI/NSA not to violate our privacy anymore?
Really not sure what to believe about this one.
I used to wonder whether some success of social media companies couldn't be explained by secret payments for backdoor access. You could be operating out of Europe or Africa and still get offered money, and other pressure carefully applied.
You might think you'd hold true to your plan of privacy-for-all, but if they offer $x00m or more?
Especially considering that competitors like Signal are US based. Signal is owned by Twitter, which is by no means a small player, so it isn't likely to fly under anyone's radar.
I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way. This will seem like a made-up example to you but I purposefully don't want to confuse the issue with practical examples. We can all agree that at some point this should be stopped.
A reasonable time to stop it might be if intelligence agencies get a literal screenshot from a darkweb chatroom (from a concerned participant, where the participant thinks they're really going too far) where this is being planned in exacting detail but more information is needed to be precise. (For example, suppose the source of the nuclear bomb were not Russia but not enough information was given to identify it. There are actually quite a few nuclear states and many of them are quite corrupt. A short list includes India, North Korea, Pakistan.)
I would think that this kind of actionable urgent intelligence should unlock whatever privacy safeguards are in place, but the issue is that if there is a correct "technical" solution (if cryptography works 'correctly' and is not broken, in an academic sense), then there is no technical possibility to unlock anything. If Tor, crypto currencies, and encryption "work" (in a binary, yes it works, or no, it's broken sense) then following the receipt of such a screenshot there is no technical means of any further step.
Here I'm going to be philosophical for a second. The future of technology is nearly infinite human power. You can already in the next few seconds initiate a crypto currency transfer to anyone anywhere in the world, who can receive it without any banking infrastructure or oversight.
The arc of technology has been personal human enablement. When individuals become nearly God-like and all-powerful, it is dangerous to be in a position where, like the Muslims reporting the madman banned from his U.K. mosque for radical insanity, the status quo is that if you report your friend to the authorities saying, "My online friend, God-like in his powers, is planning to murder a million people just for shits and giggles, and he's kind of insane. Unfortunately, I don't know where he is or what he's doing, but I'm pretty concerned. He has a lot of money from a few ponzi schemes he ran. It's pretty credible for the following specific reasons (screenshots, quotes, etc)." And the only response from the authorities is, "Thanks for all this. We don't know where he is either, in the grand scheme of things a million deaths isn't that much and if it happens we will look at preventing another such case."
That's a pretty silly response, isn't it? That the only possible response is, sorry, nothing can be done.
Okay, now I've laid out why there should probably be some infrastructure on the back-end.
What I don't like is that this translates to humans literally reading people's private correspondence, web searches, etc. It's not very good.
What is a good middle ground?
Can't the NSA make things that run locally, so that no human is reading your correspondence or web traffic, but as you start researching nuclear weapons and making plans on how to murder a million people, and start making those transactions, all this starts adding up and, to quote the Constitution, its tools can receive instructions "particularly describing the place to be searched, and things to be seized", so that after such a report, its perpetrator can be found, or at least enough information can be collected to stop it if it is actually taking place?
I think that all of us here could be okay with being stopped at some point between purchasing a hundred million dollars in anonymous currency, and detonating a nuclear bomb. It's sensible. That can be part of the social contract.
It's difficult. Nobody wants to live with a judge, jury, and executioner in their home looking at everything they are doing in case they break some law.
I am glad that I personally don't have to answer these questions. But we can all agree on the need for privacy (no human looks at what you're doing), and also on the reasonableness, as each individual online progresses toward infinite personal power, for protecting the rest of society from credible and immediate, specific threats.
I agree with cryptographers who think of cryptography as a tool that is either working or broken. (If it has a back door, it's 'broken').
Perhaps if tools included a certain portion that runs locally they could increase the extent to which the tools are not actually 'broken' (i.e. they are actually working, and actually not backdoored), while also increasing the safety every single person has from other individuals being able to plan or pay for their specific death anonymously, and with impunity.
I realize that my suggestions here are not specific enough to be actionable, they are not clear recommendations. But I don't even see these possibilities being discussed (at least publicly), so I wanted to at least move the conversation a bit in this direction.
I'm getting downvoted pretty heavily. Let me ask point-blank: are you okay with someone being able to spend two weeks on the dark web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim when they detonate the results - or would you want that person to be stopped at some point after they started doing that? The future of information is that it is ubiquitous and easy to access.
Actually secure communications would mean that it is technically impossible to see if someone has started communicating with people at ISIS who have overseen and helped people blow themselves up. I am not saying communication should be weak and insecure, but should I really, practically, be able to start doing that if I want?
This is not some kind of false example, either.
Also, for downvoters: I think it is easier for you to agree with the other half of my statement, that nobody should be looking at our web traffic and correspondence, and that it should be actually secure, and also actually private.
The abilities of a technician can be very valuable to a business, but especially as it begins to scale the owner/operator(s) need to adopt different mindsets in order to succeed. In short, if you don't like the idea of spending most of your time on business or marketing stuff, you should find someone who can handle those, or perhaps be a solo consultant/contractor. (I think this is a large part of why YC encourages cofounders so much.)
Exceptions certainly exist--there was a time when tech was a magical world and you could do magic things just by being an expert engineer--but increasingly I feel they are getting rarer.
Started out when I was eight: I got into electronics. Read all the magazines, taught myself how to design electronic circuits from library books. Kept my own hand-written library card system describing the specs of all the transistors and ICs I could get my hands on. Designed, built, and repaired devices for other people, who paid me for the materials and for my trouble. I was 15. When I was 18 I went to university and switched majors multiple times.
And then the Dutch electronics magazine published the Junior computer, based on the 6502. I spent all my money on it, and learned assembler by inputting hex numbers. After that came the MSX computer (I disassembled the BASIC interpreter to grok how it worked) and I started searching for programmers jobs.
Found a job at KLM, where I came out top of the class and entered a special group called SMART, for special internal projects. All of us programmed in IBM S/370 assembler; they tested C but it was too slow.
I was 25 by then. The following years went downhill - in IT. I changed jobs multiple times, but the companies kept going out of business. I was flabbergasted at the amount of incompetence I saw in salespeople and at C-level. I had no idea, coming from a blue collar background.
Side note: in 4 years I had 9 CEOs, 8 of whom left their wives for their secretaries in the time I worked there! I had a lot of respect for the 9th, until I found out a couple of years later that he'd done the same after I left.
So I decided: why not start my own business? I was capable of going bust as well, and couldn't do worse than those guys. So I started the first commercial ISP in the NL. One thing led to another and, many companies later, I have now pulled out of most and am again starting as a founder, learning all the new hot technologies.
If you aren't in a country with proper healthcare and are not earning AT LEAST $300k USD a year (consistently), understand that all your years of work can be destroyed by one diagnosis. And plan accordingly.
I love my life. I've had a charmed existence moving to wonderful locales and doing what I wanted when I wanted; but the genetic lottery cannot be outwitted. You can be healthy one day and in debt the next.
Plan accordingly. Don't let youth and good health lull you into complacency.
It's completely possible and attainable for software developers to be independent anywhere on the globe, but understand the potential financial implications and limitations of the social safety nets of your country of citizenship/residence. Plan accordingly.
1) You can keep your job. Just always build things on the side. It keeps you coding for play and not just for work.
2) Try to sell the things you build. You don't need to be fully polished on day one. In fact, you shouldn't be and can't be because you need real customers to really understand their needs.
3) You can find the numbers online, but my SaaS project Cronitor had just $500 MRR after seven months. You need patience, and to adjust your effort to match what the business can give back. By letting it coast a bit while it picked up momentum we prevented burnout. When it started to grow faster we could pour some attention in and level up the product.
4) Grow it while you work your day job. This is easy at first and gets harder. Having a partner is important here. Alternative: a business where a little downtime is not a big deal.
5) When it gets stressful, know your commitments. Your day job gets first bite, and when you can't do that anymore, you know it's time to move on and do it full time.
Most importantly, the tl;dr is: quit your job after you've replaced most of your salary. And before you quit, enjoy the incremental income.
"Ok, I'm going to send you on this course, it's a 20,000 guilder expense on my side. If you let me down, you're out and I never ever want to see you again; if you pass the examination at the end, you've got a job as a junior programmer."
Stuff like this can make a world of difference to the trajectory a young person's life takes. Much respect to people who are open and inclusive like this
You need a good chunk of capital to survive the first few years, before you even think about expanding. (Which, when it's time, you must do, or risk shrinking into oblivion. And not everybody is able and willing to expand forever. I wasn't.)
Just sharing my own little caveat; otherwise I think starting your own firm/consultancy is fantastic.
The biggest thing I have learned so far is that it's really not about "hard work". IMO, it's much more about how well you balance work and life. Are you on an unsustainable path or a sustainable one? Are you enjoying what you are doing? This is critical. Are you genuinely eager to work on it, and can you sustain that after one year? If so, you're highly likely to succeed, in my opinion. If you believe in it, and love what you're working on, it's very likely there are other people out there who do too.
Had to fold the business. Nearly lost my house and marriage. This was in the late 1990s.
For every story, there's an equal and opposite one.
Statistically, you can't do it, and you'll waste a lot of time, money and prime earning years figuring that out. That's not to say you shouldn't try, but only to say you should be realistic about what you are likely sacrificing for that small chance of success.
> If a high school drop-out with nothing but a typing diploma could do it, so can you. Now go do it.
This is not representative. From your writing, you seem to be "smart" too, apart from hard-working. =D
In general, of course. In some specific cases, maybe not.
For you to have a job by being hired by an employer, someone else has to create that job. In the private sector, usually someone has to start and own a company, make it successful, and generate enough free cash to pay you.
So, if you are looking for a job at all, you are essentially admitting that it's possible, reasonable, common, doable, etc. to start and own a company and make it successful.
So, why not you? That is, if you want job, especially a good job, then consider creating that job for yourself.
I'm certain companies could benefit immensely by contracting with someone to help improve their Ops game, but I'm at that awkward stage where I'm not 100% sure anyone will request my services. Wish me luck!
"I don't know if we each have a destiny,............or if we're all just floating around......accidental-like on a breeze.........But I think,......maybe it's both" - Do we have defined destiny? or Can we influence the destiny with our actions? The answer is not black and white.
Life is like that, it is part luck and part hard work/smart work. The context, the skill set etc is entirely different for every single individual. So you can't really learn much from patterns.
On an ending note-"- Do you ever dream, Forrest, about who you're gonna be?-Who I'm gonna be?............. Yeah. .................Aren't I going to be me? "
I mean, my teammates all quit as soon as they realized how much hard work a startup requires... Life lesson!
And in actuality I haven't made much progress toward an actual company. But, I mean, this has been pretty great in some ways. Best part is my mentor was an expert in my field (automation, robots, etc.) and has really had a lot of fantastic input for me.
My key takeaway from the competition? Don't try to build a startup on your own. You really MUST have a strong team backing you up.
I'm probably going to crumple up this owl and start another one soon. Most importantly if you want to get good at drawing owls, you have to love drawing.
American Institute of Mathematics Open Textbook Initiative -- note that they review the texts too and are a bit picky about what they list: https://aimath.org/textbooks/
More than just math: University of Minnesota open textbook initiative. Stats, CS, and humanities as well: https://open.umn.edu/opentextbooks/
Not a repository, but an individual free/open math text under development -- comments and feedback desired: https://www.softcover.io/read/bf34ea25/math_for_finance It starts with elementary probability and then combines probability and stats with linear algebra, multivariable calculus, and differential equations. Aimed at folks who have seen the math before but need a refresher and a viewpoint that unifies seemingly disparate topics. Note that it uses Softcover, a great way to publish technical texts to several formats at once.
Is there anyone who has done something similar who might share some suggestions for success?
Calculus Revisited: Single Variable Calculus | MIT https://ocw.mit.edu/resources/res-18-006-calculus-revisited-...
Calculus Revisited: Multivariable Calculus | MIT https://ocw.mit.edu/resources/res-18-007-calculus-revisited-...
Complex Variables, Differential Equations, and Linear Algebra | MIT https://ocw.mit.edu/resources/res-18-008-calculus-revisited-...
Linear Algebra | MIT - https://www.youtube.com/watch?v=ZK3O402wf1c&list=PLE7DDD9101...
Introduction to Linear Dynamical Systems |Stanford https://see.stanford.edu/Course/EE263
Probability | Harvard https://www.youtube.com/playlist?list=PL2SOU6wwxB0uwwH80KTQ6...
Intermediate Statistics | CMU https://www.youtube.com/playlist?list=PLcW8xNfZoh7eI7KSWneVW...
Convex Optimization I | Stanford https://see.stanford.edu/Course/EE364A
Math Background for ML | CMU https://www.youtube.com/playlist?list=PL7y-1rk2cCsA339crwXMW...
I'm currently working through Udacity's Self-Driving Car Engineer Nanodegree; if everything goes well, I should be heading into Term 3 soon.
What has become painfully clear to me - both before I started this course and now in the middle of it - is my lack of certain education in mathematics.
Particularly stats/probability, but lately also the basics of calculus, namely derivatives and integrals. So I would like some assistance - namely, what are your suggestions for remedying this after I finish the Nanodegree?
My thoughts have been to take a reprieve from coursework, then maybe next year launch into something more. Maybe more MOOCs or other online course or resources (like these books) geared toward learning this material. Or perhaps taking a course or two at a local community college? Perhaps I could audit a local (ASU West here in Arizona would be closest) mathematics course? Or maybe do some other kind of formal online study (I have considered getting a BS then an MS via an online school).
I seem to do alright with MOOCs "at my own pace" - but I also do well in a more structured system, with a set syllabus, schedule, and testing.
I just want to see what others think might be the best approach, in order to assist my decision in the future. Thank you all for any suggestions and such.
I'll never forget how the math professors would switch from edition x to edition x+1 with the only clearly visible difference being the homework assignment questions.
I truly hope that this is not just a trove of books, but also a signaling of the change in culture from opportunism at the expense of the students to openness.
That alternative is the books published or republished by Dover publications. They like to take older textbooks and purchase rights to republish them as relatively inexpensive paperback editions. A very large fraction of their books are under $20, with many under $12. A few are more expensive, but only rarely more than $30.
The level ranges from suitable for high school students to graduate level and beyond.
Here's their mathematics section: http://store.doverpublications.com/by-subject-mathematics.ht...
Don't overlook the "general" subcategory. They have some wonderful problem books there, such as Yaglom and Yaglom's "Challenging Mathematical Problems With Elementary Solutions" series.
They also do this for physics, chemistry, engineering, history, economics, computer science, biology, earth science and more.
This list's a couple years old, for machine learning, including basic lin.alg, prob/stats: https://www.reddit.com/r/MachineLearning/comments/1jeawf/mac...
- Deep learning book by Goodfellow et al., http://www.deeplearningbook.org/ (the one by Michael Nielsen is good as well)
- Foundations, excellent text: http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning... Shalev-Shwartz, Ben-David
- https://www.cs.cornell.edu/jeh/bookMay2015.pdf, Blum, Hopcroft, Kannan, probably an older version
Applied Combinatorics, by Professors William T. Trotter and Mitchel T. Keller [3,4]
I can't comment on the deeper parts of the book, because I don't get it yet (I don't really have the time atm to slog through a 900 page book, as much as I'd love to)
"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." 
Showing a limitation of the maxim or Feynman's hubris?
If you rehash it in smaller words, just by information density alone, aren't you guaranteed to be losing some detail?
This is the sort of thing you'd believe if you were an arrogant 20-something who thought they could learn any subject in a few hours, cushioned thoroughly by the illusion of understanding.
"Oh yeah, I understand the mechanisms of human vision. It's just rods and cones."
"I understand the causes of the American revolution. It was just people protecting their property."
"I understand Joyce's Ulysses. It just follows three people from Dublin over a single day. I read the Cliffs Notes."
"I understand why coffee makes me alert. It's just blocking some brain things that make you sleepy."
Now, I will agree that if you don't know how to break interactions down into teachable parts, you will probably have trouble as an engineer or scientist both advancing your own knowledge and introducing people to the field. But to suggest that your understanding of a subject hinges on being able to deliver an explanation in simple terms is just silly.
> I really can't do a good job, any job, of explaining magnetic force in terms of something else you're more familiar with, because I don't understand it in terms of anything else you're more familiar with.
The article implies this is a case of the scientist expressing that he didn't understand a thing. But watching the video in full, one realizes he is saying something different:
"It's a force which is present all the time and very common and is a basic force.
I can't explain that attraction in terms of anything else that's familiar to you. For example if we say that magnets attract like as if they are connected by rubber bands I would be cheating you because they're not connected by rubber bands-- I should be in trouble if you soon ask me about the nature of the band. And secondly, if you were curious enough you would ask me why rubber bands tend to pull back together again, and I would end up explaining that in terms of electrical forces which are the very things that I'm trying to use the rubber bands to explain. So I have cheated very badly, you see."
In other words, for some phenomena the only simple examples are themselves instances of that same phenomena. So the only possible analogies are themselves merely tautologies.
I've noticed something less sweeping though similarly absurd with the internet. As more and more of people's daily lives depend on internet technologies, it becomes more difficult to find modern, simple examples for analogies that don't rely on similar internet technologies. So someone who wants to explain the wonders of packet switching compares it to long-distance telephone calls, but they then spend the bulk of that time explaining long-distance phone calls to people who have never used a wired phone.
This actually came up for me at the office. I was asking a bunch of questions about the Z transform and the Fast Fourier Transform. The person I was talking to said, "Hey, just call the function in MATLAB, it doesn't matter how it works, just that you understand what it is saying."
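For what it's worth, the gap between "just call the function" and knowing what it computes can be closed in a few lines: the naive DFT below is the exact sum the FFT evaluates, just O(N^2) instead of O(N log N). A rough pure-Python sketch:

```python
import cmath

def dft(xs):
    # Direct discrete Fourier transform:
    # X[k] = sum_n x[n] * exp(-2*pi*i * k * n / N)
    # The FFT is just a fast algorithm for this same sum.
    N = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(xs))
            for k in range(N)]

# A constant signal has all its energy in the zero-frequency bin
X = dft([1.0, 1.0, 1.0, 1.0])
assert abs(X[0] - 4) < 1e-9
assert all(abs(v) < 1e-9 for v in X[1:])
```

Calling a library fft gives the same numbers faster; seeing the sum spelled out is what tells you what those numbers mean.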
All of my life I have rebelled at this notion. My earliest recollection of running into it was when I was in grade school and took apart three wind up alarm clocks, each more carefully than the one previously. My Mom was curious what I was looking for and I told her, "How does a clock know how long one second is?" She didn't know, and I didn't know, and while I had mastered using a clock and accepting that it would go off when I set it to go off, I didn't really "know" how a clock worked until I had taken apart and identified, (and modified to validate the identification :), the escapement.
The question, of course, is how well this experiment has succeeded. My own point of view which, however, does not seem to be shared by most of the people who worked with the students is pessimistic. I don't think I did very well by the students. When I look at the way the majority of the students handled the problems on the examinations, I think the system is a failure. Of course, my friends point out to me that there were one or two dozen students who very surprisingly understood almost everything in all of the lectures, and who were quite active in working with the material and worrying about the many points in an excited and interested way. These people have now, I believe, a first rate background in physics and they are, after all, the ones I was trying to get at. But then, "The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous." (Gibbon)
Richard P. Feynman, 1963
Note that by his own account, most of his students did not do well. James Gleick's biography of Feynman, Genius, has a longer discussion of the disappointing results of his lectures to undergraduates at Caltech, many of whom reportedly stopped attending the lectures as they were not getting anything useful out of them.
That Feynman in fact had difficulty explaining freshman physics to the highly qualified students at Caltech surely does not indicate he did not understand freshman physics.
Some topics are simply very complex. It is not clear that they can always be conveyed in simple terms. In some cases, a "big picture" explanation may be possible but the details remain complicated. In some cases, a hand-waving analogy to some everyday phenomenon may create the illusion of understanding but be misleading or wrong.
To give a specific modern example, a state of the art video codec such as H.264 is extremely complex, built of many complicated components and sub-algorithms. While it may be possible to explain the big picture in relatively simple terms, the detailed implementation and operation is not simple. The inability of someone who creates or implements a video codec to explain it in simple terms to a layman is not an indication that they do not understand it.
Some people look at advanced mathematics or physics and wonder why it has to be so complicated and so full of jargon. It's complicated because it is. The jargon, believe it or not, is mostly an attempt to make it easier to communicate. It would be very, very difficult to wade through these ideas without introducing new words with precise definitions.
Then again, John von Neumann said, "In mathematics you don't understand things. You just get used to them." So maybe the title is true for trivial reasons after all.
I'm not saying I'll take it as literally true in every situation. But what I love about the quote is that it sets the bar for "understanding" very high.
People sell themselves short on understanding - they reach a certain level and are satisfied that they understand something, when there is actually much deeper understanding to be had. For example, being able to write a proof of a theorem can be very far from understanding why it's true, but even mathematicians sometimes pretend it's the same.
So I like that this quote challenges us to understand things more deeply. And more often than not, I find it rings true.
(A basic example coming to mind is the determinant of a matrix. Can be explained in simple terms to children (at least the key idea), or in confusing terms to freshman linear algebra students....)
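That determinant example really can be made simple: for a 2x2 matrix, the absolute value of the determinant is just the factor by which areas scale under the map, with a sign flip for orientation. A quick sketch of that key idea:

```python
def det2(a, b, c, d):
    # determinant of the 2x2 matrix [[a, b], [c, d]]
    return a * d - b * c

# Doubling x and tripling y scales the unit square's area by 6
assert det2(2, 0, 0, 3) == 6
# A shear slides the square sideways but preserves its area
assert det2(1, 5, 0, 1) == 1
# Swapping the axes flips orientation: same area, negative sign
assert det2(0, 1, 1, 0) == -1
```

The freshman version (cofactor expansions, permutation sums) computes the same thing; the "area scaling" picture is the part a child can hold onto.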
It is also dangerous to assume this, because that is exactly how we reached the "my uninformed opinion is as valid as your years of experience" aspect of the current political climate. NO, things are NOT as simple as you think they are just because you saw it in the space of a tweet!
On the other hand, it is important to recognize expertise over bullshit. The easiest defense is having several experts, since at a certain point they would need to do an awful lot of collusion to just make things up between them (i.e. if enough of them agree then what they say is apparently correct).
And there are three kinds of explanation: visual, mathematical, and verbal.
So sometimes, you understand something visually, or mathematically, but you are forced to put it into verbal terms (say, over a text only channel, or voice), and then you may seem not to be able to explain it even though you understand it.
We've started doing Explorable Explanations / Animated Explainers, here are some we've done and some that others have done:
- Explaining how GIT works: http://gun.js.org/explainers/school/class.html
- How neurons work: http://ncase.me/neurons/
- How end-to-end cryptography works: http://gun.js.org/explainers/data/security.html
- How gerrymandering works: http://polytrope.com/district/ (by a friend of mine!)
- How sorting on partial data / data streams works: http://gun.js.org/explainers/basketball/basketball.html
And more! It is possible, it can be done. But it is hard. That is no excuse for not trying though. Big shout out to Bret Victor's work for starting a lot of this, and thanks to Feynman for encouraging and practicing what he teaches.
I think the effort involved in trying to come up with a serialization causes us to more carefully examine our models, which usually improves them.
But I don't think the lack of a good serialization implies the lack of a good model.
Explaining something in simple terms does not mean you _fully_ explain it. You explain the essence (or what you see as the essence) of the thing. Google search is: you type a question into a box and Google shows you the best answer. Google search is a lot more than that, of course, but if you can't "boil it down" you don't understand it.
This is the top line of a git commit vs. the comments you leave in the source code. You can spend months working on thousands of lines of code, but if you can't describe it in a single sentence (while leaving a lot out!) it's a bad sign.
It is amazing how rarely people can get it across to me in basic terms. In fact even the idea of breaking it down into non technical concepts seems to be surprising and alien to many people.
I really admire those who can.
The Feynman Lectures are now on Youtube, and I like to watch them (all of them) every few years. I highly recommend that if you've never seen them, you take some time and watch them- really watch them. Close the other windows, turn your phone to do not disturb, and really watch these masterpieces of education.
And in my experience, the harder the subject, the more informally experts speak. Partly, I think, because they have less to prove, and partly because the harder the ideas you're talking about, the less you can afford to let language get in the way.
Informal language is the athletic clothing of ideas.
It doesn't work for biology, which is complicated at the bottom. Evolution doesn't have the parsimony of physics. Nor does it have to be understandable by humans.
Whether it works for software is a design issue. It's certainly possible to create software which cannot be explained simply.
And an underappreciated corollary is...
If you want something explained well in simple terms, you have to find someone who understands it deeply.
In the sciences, that means someone who has it as their research focus. Because as you move away from that focus, understanding rapidly becomes ramshackle. Leave someone's subfield, and you might as well be talking with a random graduate student (in that field). And that's hopeless.
Thus many research talks have videos and stories which would be nice to have in a K-12 classroom. And most all K-12 education content is incoherent wretchedness.
An old essay of mine: "Scientific expertise is not broadly distributed - an underappreciated obstacle to creating better content" http://www.clarifyscience.info/part/MHjx6 In which a 5-year old with finger paints wants to paint the Sun, but encounters astronomy graduate students.
"I am sorry for the length of my letter, but I had not the time to write a short one." - Blaise Pascal 1657
There's a sad little genre of low-quality science education research that goes: "I tried to teach topic T to students of age A. I taught it <really really badly>. Surprisingly, that didn't work! I've reached the obvious conclusion: students of age A are developmentally unready to learn topic T."
But understanding, while necessary, is not sufficient. At PhD poster session practice, it's often remarkably hard to help candidates develop an "elevator pitch". To clearly understand the core of what they've spent the last n years working on. I'm still amazed by how often one gets something like "wow, now I can explain it to my parents".
"But if you can ONLY explain something in simple terms, you still don't understand it"
Many think they understand something, when really they only know how to use it. For example, I understand how to use a computer, but that doesn't mean I understand how a processor works at the level of registers and assembly language. So if I were to try to teach someone to use a computer, then I could say things like "Click that, and this will happen," or "Type such and such, and then this other thing you want will happen." But if anyone asked me how that actually works, to follow all the way how a physical mouse-click gets transformed into a change in the window on the screen, then I couldn't. Or, even if I could, it might take me half an hour to explain it, depending on how much they want to know.
So maybe it's that we understand things, but at different levels. Few people understand something at its deepest level. In fact, physicists would say no one does.
For example, a lever seems conceptually simple, but to create a lever in the body is extraordinarily hard. The joints have to be solidly connected and free to open or close. The direction must be precise and rotation must not wobble. There are so many things that can go wrong and lots of places for force to leak out.
I think there are too many times when people affect a tone of authority and expertise and hide their lack of understanding in verbiage and complexity while making excuses for their inability to explain it to the layman.
Monkey eat => Monkey live.
Monkey live => Monkey eat.
Monkey not eat => Monkey not live.
Monkey not live => Monkey not eat.
Here "=>" is used as in "implies"/"because". The last statement is weird. There are more ways for monkey to "not live" than to "not eat".
Not being able to explain does not imply not being able to understand. Not understanding surely implies not being able to explain.
Correlation, causation, get it?
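Spelled out, the monkey example is just an implication and its contrapositive; writing it out makes the asymmetry explicit (with U = understands, E = can explain):

```latex
% The claim above: not understanding implies not explaining,
% which is logically equivalent to its contrapositive:
\neg U \Rightarrow \neg E
  \quad\Longleftrightarrow\quad
E \Rightarrow U
% The Feynman-style slogan is the *separate* implication
U \Rightarrow E
  \quad\Longleftrightarrow\quad
\neg E \Rightarrow \neg U
% Neither pair follows from the other; conflating them is the
% converse error the monkey example illustrates.
```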
If you can't explain it to a six year old, you don't understand it yourself.
Often, it takes a lot of awareness of what are the common mental models / mental blocks other people have when learning the concept you are trying to communicate. You have to structure things as a series of strategic progressions before tackling the most complicated form of something, all of that is more the art of teaching ( which of course requires good understanding )
Of course, if someone can do that, it's a brilliant proof they do understand something.
If they can't do it, then it can leave you with doubt about what someone else understands. Which in Apple's case may be considered entirely unacceptable.
The main issue when explaining concepts (especially maths concepts) is switching from one formal context to another, deciding what details to omit, and determining what rules in both contexts should be treated as analogous.
Think of a translator. He/she/it needs proficiency in two languages to do a proper translation. Lacking a second language precludes translation. But it doesn't affect mastery of your native tongue.
A popular question to qualify for engineering job interviews is "describe in simple terms what happens when a user accesses a website on the Internet" - The question doesn't give any info on who the target audience is so you never know what level of detail you're supposed to go into. Because this is an engineering question, I tend to go into more detail but after a certain level, you can't really keep it simple because the reader has to understand what things like cache are... Else you will spend 20 pages just writing definitions.
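As a toy version of that interview answer, here is a minimal sketch of just the first two steps (name resolution and building the HTTP request); the URL and the helper's name are made up for illustration, and a real browser of course also does caching, TLS, rendering, and much more:

```python
import socket
from urllib.parse import urlsplit

def first_steps(url):
    """Sketch (not a browser!) of the first steps of loading a URL:
    parse it, resolve the host via DNS, and build the HTTP request
    that would be sent over the resulting TCP connection."""
    parts = urlsplit(url)
    host = parts.hostname
    path = parts.path or "/"
    # Step 1: DNS turns the hostname into one or more IP addresses.
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 80)})
    # Step 2: over a TCP connection to one of those addresses, the
    # client sends a plain-text request like this one.
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"
    return addrs, request

addrs, request = first_steps("http://localhost/index.html")
```

Even this much already drags in DNS, TCP, and HTTP, which is the point: past a certain depth you can't keep it simple without 20 pages of definitions.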
Skeptic: I understand X. I've spent years working on it, and I'm recognized as an expert in the field. But I can't explain X in simple terms.
Believer: Well, then you obviously don't really understand it. Can you prove to me that you do?
Being able to explain things in simple terms is a skill in and of itself. Many people do not possess this particular skill, but that does not mean they are unable to understand any subject.
Eg. IP = an address, like your home address, so the internet knows where to look. We use zipcodes, the web uses numbers.
Then "I need to adjust a DNS record with our IP" becomes "I will point the website to our address."
If it's not obvious, then all my previous clients are lying ( just mentioning it, cause it's possible)
She'd gone to Caltech. That was on her resume. So I asked her if she'd ever taken a class from Feynman. That was actually unlikely, but she had sat in on a seminar with Feynman once. She said he could explain the most difficult material in a way you would understand. You would understand it walking away, and that would last about 15 minutes, after which you'd confuse yourself again.
Sadly it's 2017 and the popularity of TEDTalks make the laymen think otherwise.
Take for example legal concepts like securities law or environmental regulation. Yes, you can "simplify" an explanation of the Securities Act or the Paris Accord enough to fit them into a tweet, but you lose information necessary to formulating a full understanding.
If you're trying to have an informed debate about policy adoption, the details matter.
Opposite example: Simplify how walking works, and make sure to include the critical systems such as major muscle groups, stabilizers, vision, inner ear, thigh/knee/pelvis/hip construction, the curved spine and its connection to the head, and blood pressure flow/regulation.
The Buddha and the Upanishadic seers were exceptionally good with explaining complex phenomena in simple terms.
Apparent sophistication is a sign of confusion. Clarity is evidence [of deep understanding].
Nature is vastly complex but not complicated (a few fundamental laws at work). Only simple things work.
If you hold a complex idea in your head, translating it into English can be difficult, because part of the process is removing or altering information to fit it into existing notions. That is why buzzwords are popular: they take an idea and put it in relatable concepts for the masses.
In fact, "if you make this fallacy, you're a terrible human being" (which is sarcasm here since this very statement includes the exact same fallacy)
And to be fair, it's pretty rough sledding even when you understand the operators involved.
So, no, you cannot explain everything in simple terms. But you can find sweet spots when trading brevity for accuracy.
1. Be familiar with logical reasoning: what "implies" or even "for any x" means. 2. Know the relevant set of theorems and axioms used in the demonstration.
You could probably illustrate what a mathematical result implies in some real-life example, but you won't be explaining it.
Quantum physics is a really good example of this, because it's not that difficult to understand if you look at it with the mathematical PoV : it's basically linear algebra in infinite dimension, you have vectors (in the space of functions of |R) and linear applications on these vectors (with all properties of such applications, like eigenvalues and eigenvectors), etc. But if you try to explain it in simple terms, you're going to distort the reality to fit in the macroscopic-scaled human representation of the world and you'll probably say things that won't be true.
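That "it's basically linear algebra" point can be made concrete in a finite-dimensional toy (NumPy here, so two dimensions rather than the infinite-dimensional function space the comment describes): a state is a unit vector, an observable is a Hermitian matrix, and the possible measurement outcomes are its eigenvalues.

```python
import numpy as np

# Finite-dimensional toy of "quantum mechanics is linear algebra":
# a state is a unit vector, an observable is a Hermitian matrix, and
# the possible measurement outcomes are its eigenvalues.
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])  # Pauli X, a Hermitian observable

eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)  # outcomes: -1 and +1

# Born rule: measuring sigma_x on the state |0> = (1, 0) gives each
# outcome with probability |<eigenvector, state>|^2.
state = np.array([1.0, 0.0])
probs = np.abs(eigenvectors.T @ state) ** 2  # -> [0.5, 0.5]
```

Nothing in this block is mysterious to anyone who knows eigenvalues, which is exactly the comment's point; the distortion only starts when you try to say it without the math.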
Whatever is well conceived is clearly said, / And the words to say it flow with ease.
Ce que l'on conçoit bien s'énonce clairement, / Et les mots pour le dire arrivent aisément.
I understood very early in life that if I cried I would be hit. I couldn't talk, write, or communicate my understanding in any way, but I understood clearly.
Remember that "in simple terms" does not mean easy or over simplifying something. To me it means making a to-the-point and jargon-free explanation.
I've witnessed dozens of people try and spectacularly fail at teaching their own language.
Person B: How?
Person A: You're a dummy! There's mountains of evidence!
Person B: Like...
Person A: You're killing the vibe brah.
In the words of the xkcd on the subject, (check the title text):
"Actually, I think if all higher math professors had to write for the Simple English Wikipedia for a year, we'd be in much better shape academically."https://xkcd.com/547/
If you're already running a trusted Debian system, then install the debian-keyring package. Packages are signed and verified, so those keys don't need further verification.
Otherwise, fetch the keys with gpg:
$ gpg --keyserver keyring.debian.org --recv-keys <...> # e.g. 0x6294BE9B
$ gpg --fingerprint
Finally download the checksum and their signature files, and verify their signatures:
$ gpg --verify <...> # e.g. SHA512SUMS.sign
$ gpg --no-default-keyring --keyring /usr/share/keyrings/debian-role-keys.gpg --verify <...> # if using debian-keyring package
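The signature check covers the checksum file; the downloaded image itself is then checked against it with `sha512sum -c`. A self-contained toy of that last step (the ISO filename here is fabricated; in reality both files come from the Debian mirror):

```shell
# Toy demonstration of the final checksum step -- fabricate the files
# so the commands can run anywhere:
cd "$(mktemp -d)"
echo "pretend this is the ISO" > debian-9.0.0-amd64-netinst.iso
sha512sum debian-9.0.0-amd64-netinst.iso > SHA512SUMS
sha512sum -c --ignore-missing SHA512SUMS   # prints "<filename>: OK"
```

`--ignore-missing` matters because SHA512SUMS lists every image for the release, not just the one you downloaded.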
From what I read, Debian 8 will be supported until April 2020 and Debian 9 until June 2022.
So in 2020 I will have to decide to either switch to Debian 9 or to Debian 10 which probably will be out by then. Is that correct? My feeling is that it might make things easier for me to skip Debian 9 and go directly with Debian 10.
I did the same with 7. My server used Debian 6 until I switched to Debian 8.
> If you use debhelper/9.20151219 or newer in Debian, it will generate debug symbol packages (as <package>-dbgsym) for you with no additional changes to your source package. These packages are not available from the main archive. Please fetch these from debian-debug or snapshot.debian.org.
No more shipping -dbg packages with full binaries. And less storage space is always a win.
This is surprising though:
> Python 2.7.13 and 3.5.3
I thought 3.6 was in Stretch out of the box. Why 3.5 only (especially on a LTS)? :\
Although Alpine Linux is my personal choice.
Edit: the uploads are complete, v1.2.0 of debian9-amd64 and debian9-i386 are released.
If there is user demand for it, we can look into vmware boxes, and possibly hyper-v too.
Apologies if anyone feels this is off-topic/opportunistic - AFAIK all other Debian 9 boxes on Atlas target Virtualbox only, and while projects like Boxcutter (which we forked from) do support Parallels/etc, they aren't always the quickest to produce new boxes.
How well is Chromium supported on Debian?
I like it as a secondary browser for its excellent support of multiple profiles but I run Ubuntu and had to switch to Chrome because Chromium doesn't seem to be updated promptly.
One thing that is new in this release is the availability of mod_http2, for Apache. I'm looking forward to seeing if that will increase the response-time of my various websites.
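If you want to try it, enabling HTTP/2 on the Apache shipped with this release is roughly the following (a sketch; browsers only negotiate h2 over TLS, and your config file layout may differ):

```shell
sudo a2enmod http2
# then, in the TLS-enabled virtual host (or globally):
#   Protocols h2 http/1.1
sudo systemctl restart apache2
```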
In version 45 (released on March 8, 2016)
I suppose the idea of reducing freeze time with "always releasable testing" didn't really work out (lack of resources?).
Sid (Debian unstable) is named after the guy that breaks the toys.
In the past Debian was considered to be one of the most stable Linux distributions available. Stability and quality were priorities above anything else. However, around 2014 something changed when systemd was forced into Debian in a way that would never have happened before the new generation of developers took over the project.
Maybe this is just something we have to get used to; young developers seem to value ease above quality and stability, which also explains the current flood of Electron apps.
The inputs seem to be road line recognition, optical flow for the road, and solid object recognition, all vision-driven. Object recognition is limited. It doesn't recognize traffic cones as obstacles, either on the road centerline or on the road edge. Nor does it seem to be aware of guard rails or bridge railings just outside the road edge. It probably can't drive around an obstacle; we never see it do that in the video.
This looks like lane following plus smart cruise control plus GPS-based route guidance. That's nice, but it's not good enough that you can go to sleep while it's driving.
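As a caricature (emphatically not Tesla's code), "lane following plus smart cruise control" reduces to two simple feedback loops, which is why it falls far short of handling obstacles or letting you sleep; all names and gains below are invented for illustration:

```python
def steering_command(lateral_offset_m: float, kp: float = 0.5) -> float:
    """Hypothetical proportional lane keeping: steer back toward the
    lane centre. Positive offset = drifted right, so steer left."""
    return -kp * lateral_offset_m

def cruise_accel(gap_m: float, desired_gap_m: float = 30.0,
                 kg: float = 0.1) -> float:
    """Hypothetical traffic-aware cruise control: accelerate when the
    gap to the lead car is larger than desired, brake when smaller."""
    return kg * (gap_m - desired_gap_m)
```

Neither loop has any concept of a traffic cone, a guard rail, or a path around an obstacle, which matches what the video shows.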
"Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year."
Autopilot Updates: We just released the latest version of Autopilot. You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon. Automatic Emergency Braking, Forward + Side Collision Warning, and more advanced safety features are also active and standard.
All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. And Tesla vehicles continue to improve with over-the-air software updates, introducing new features and improving existing functionality to make your vehicle safer and more capable over time.
My theory is still that the demo video is actually from Nvidia's SDK and the actual autopilot they deployed is totally different and not actually in the 'self-driving' category at all at this point.
But they are very aggressively rolling out updates and new features for more autonomy and yes they do intend to push for a complete door-to-door self-drive ASAP, ideally before the end of 2017 (at least as a new alpha version they can demo). Otherwise they would not sell it as such. But they do not plan to take another year to get there, based on Musk's tweets and the fact so many already paid extra for a full self-driving ability.
There are a few new features that my AP1 might not have, like Perpendicular Autopark, but I won't know till I get it back. From what it seems, it's just gotten to the level they were at with the previous generation that was developed by or in conjunction with MobilEye.
I think they will need a hardware revision for actual full self driving perhaps 2 years away.
This is a statement of intent, and production vehicles are a long way from having software that enables this.
The number of objects to detect and avoid will be way too high.
The tests show almost clear driving conditions. This should be tested on the streets of NY or a busy city like Mumbai.
I think one big selling point of cars has always been that they grant the user a great amount of autonomy (unprecedented, in their time, taken for granted nowadays). You can ride your car and go anywhere you like! The cost of that autonomy of course is that some of us will be killed or maimed in road accidents, because you can't give silly little monkeys autonomy behind the controls of big powerful machines without death and carnage ensuing.
Self-driving cars propose to reduce this risk of death and injury by taking away the autonomy we traded it for in the first place. What remains would be just a mindless automatic system carting the user to and fro. Well, in that case we don't need to wait around for full level-5 autonomy. We already have dumb machines that can do that: trains, trams, all sorts of vehicles-on-rails.
Why do we need self-driving cars, then?
Answer: we don't. And I haven't for a moment believed that any of this is anything to do with road safety. Note that nobody even discusses the other 900-pound gorilla in the room: pollution.
Guess what? Taking cars off roads completely would also reduce air and noise pollution tremendously.
That claim is strong and false. What about Roadster and the old Model S with the old AP1 hardware?
I wonder what the current status is, both in terms of software validation, and regulatory approval.
And with that, this study is bullshit.
Human beings don't listen to linear sine sweeps. We listen to music. Recorded music has 8+ octaves of frequency range (the bottom octave plus a little extra is almost always rolled off in real-world recordings, to ease stress on downstream components that can't reproduce such low frequencies anyway), and 20-50 dB of useable dynamic range.
Sine wave measurements of audio gear ignore impulse response, intermodulation distortion, phase shift, and a host of other real-world physical device responses to real-world musical signals. Scientific, reductionist thinking is inadequate to get an accurate picture of the factors that matter to human listeners.
Frequency response and total harmonic distortion aren't measured in these cases because they're useful or relevant. They're measured because they're easy to measure. It's like looking in the wrong place, because the light is better there. And the results? It's like measuring a car's performance by how well it can drive in a straight line at 60mph. Acceleration, braking, and turning are too hard to measure, so we ignore them...
I'm a musician and record producer. I've engineered and produced numerous albums, and rely on multiple different types of headphones for different purposes. The article's claim that one headphone can be easily morphed into another through mere equalization is, frankly, bullshit. The two headphones I rely on the most (Beyerdynamic DT880 and AKG K240) sound wildly different. Neither is "accurate". Neither are the Tannoy System 12 DMT midfield studio monitors I use for mixing, or the stock Subaru car speakers I use for reference to check the mixes from the Tannoys.
Audio reproduction is incredibly complex and difficult stuff. Trying to isolate one factor and saying "That explains everything!" is bad thinking.
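One way to see why magnitude-only EQ can't make two transducers interchangeable: two impulse responses can share an identical magnitude response while behaving completely differently in time. The data below is synthetic (random, not real headphone measurements), but the math is general, since time-reversing a signal leaves its magnitude spectrum unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
h_a = rng.normal(size=64)    # impulse response of "headphone A" (made up)
h_b = h_a[::-1].copy()       # time-reversed: identical magnitude spectrum

mag_a = np.abs(np.fft.rfft(h_a))
mag_b = np.abs(np.fft.rfft(h_b))
# A magnitude-only EQ match sees these as the same transducer, yet
# their impulse responses (what you hear as transients) are different.
```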
It's really cool hearing what they heard in the studio control room for the final mix. And often surprising.
You can get a range of other precalibrated pro audio headphones or correction profiles from sonarworks.
Consumer headphones are just silly IMHO. Artificially boosted frequencies with prices up to $400. A set of precalibrated MDR7506's is around $220.
If you don't care about truly flat response with correction, you can get a set of AKG K240's for $100 and they're super comfy, amazing sound and loved universally by audio pros.
- Someone with online alias NwAvGuy put the whole AV industry (ok maybe not the whole, but some big players) in a loop by showing in online forums that a totally inexpensive DIY DAC (with a free design he/she shared) could be built with quality rivaling elite products worth thousands of dollars.  (well a hazy version of the story goes that he/she exposed various audiophile review sites and forums as being full of sponsored reviews, and that eventually lead to his/her ban from head-fi.org I think)
- As for capsule mics (commonly known as condenser mics), the market is flooded with DIY designs and DIY kits which let you build/buy one for $200-$400 (the dominant cost being that of the capsule itself) that will rival the quality of multi-thousand dollar mics. They go by the name of Neumann clones, etc. (no affiliation).
In retrospect, and given the shady things AV sellers do, like trying to sell you a USB or HDMI with gold-plated pins, claiming it to be superior, it should come as no surprise.
Though, no offense, the audiophile consumer base is filled to the brim with hipsters who judge the quality of a product by its price (and some of the "experts" were busted after they failed blind tests; I think Opus vs FLAC, but I'm mixing a lot of things now).
Headphones also have a serious empiricism issue. You can probably pass off one high end Sennheiser for another in an A/B test. But you couldn't pass off an Audeze for one and have a valid A/B test. Also, you will often read or hear an expert say, if the measurements say something is bad, but it sounds good, or vice versa, then it means we're measuring the wrong things. I'm not saying that the Harman response curve isn't valid. It's just not the whole story.
tl;dr -- Buy the cheapest headphones that you really like, and ignore whatever your coworkers say. ( Hell, there are actually Beats that are good headphones! https://www.innerfidelity.com/content/time-rethink-beats-sol... )
Things are going to change in significant ways in the future as the price of signal processing, compensation, and active correction drops, however. Combining those with advances in the cheaper manufacturing of better drivers will result in the headphones of 10 years from now making the high end headphones of today seem "meh" and today's typical headphones seem trashy.
We need objective benchmarks for everything. Especially when marketing is growing bigger each year. Even "Tech websites" are biased and not objective anymore.
It's very easy to say, "I can hear so much more of the song out of my ATH-M50's than I can a pair of Beats", and you may be right. But something objective to back it up would be great, too.
This is a silly assumption, and easily explained.
1. Most headphone purchases aren't and cannot be made by comparing sound quality. Reviews of sound quality are so universally understood to be subjective that most consumers probably ignore those details.
2. There is no one subjective or objective standard that is meaningful for all listening material. Podcasts, modern pop music, older pop music, classical recordings, television shows, and movies all have wildly varying acoustic profiles between and among each genre.
3. The vast majority of headphones have Good Enough sound quality for the vast majority of consumers. Sound quality is highly unlikely to be the primary reason most consumers buy a set of headphones, and it's unlikely to be the reason they are dissatisfied with certain headphones.
4. Headphone design, form factor, build quality, fit, feature-set, and even color are all much more important factors in terms of consumer satisfaction with headphones. They are, after all, a highly noticeable part of your ensemble. They are intimately in contact with your body. And you want them to work without thinking about it too hard. In addition to being more important, most of these factors are far easier for consumers to judge between headphones than sound quality, so again it's no surprise that an arbitrary single standard of sound quality would fail to correlate with perceived value.
In other words, this is silly for reasons that have nothing to do with technical arguments about actual sound quality, whatever that means.
Price is correlated with perceived value, which includes quality, brand recognition, brand opinion, current style, and a long list of other factors.
(And, yes, this is a horrible use of the word 'correlated.' 'Derived from' or 'based on' would be much better.)
Indeed, they did find a significant difference in magnitude response _error_, although the effect was quite small.
"Nevertheless, assuming that the perceived audio quality is largely determined by the spectral magnitude response of headphones..."
This is a very wrong assumption.
Audio component designers have a more or less hard time picking out which measurements correlate with audio quality. And frequency response measurements using sine sweeps, like in the cited study, are of almost no value for discriminating between two transducers (headphones, speakers) with regard to 'audio quality'.
Also, the fact that one headphone can extend beyond 20KHz or that it can go below 20Hz will give zero guarantee of better audio quality.
Frequency response measurements using white/pink noise can give a slightly better hint because they can take a look at resonant peaks that might be annoying to the listener, but even this is not a law set in stone*
* Impulse measurements (and waterfall plots) can give you a clearer idea of how clear the sound is going to be; but then you can have a transducer with a fairly good impulse response but a slight resonant peak somewhere, OR you can sometimes have a transducer which shows a pretty flat frequency response but bad impulse response.
A good test for intermodulation distortion (the big white elephant in the audio room) will REALLY give you a hint of which headphone will be least annoying to the ear when listening to loud complex music like classical music, vocal music, etc.
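A sketch of what a two-tone IMD test reveals, using a made-up nonlinearity standing in for a transducer (tone frequencies and the distortion coefficients are arbitrary illustrative choices):

```python
import numpy as np

fs = 48_000                       # sample rate; one second of signal
t = np.arange(fs) / fs
# Classic two-tone IMD stimulus: 1000 Hz + 1300 Hz
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1300 * t)
# A made-up, mildly nonlinear "transducer"
y = x + 0.1 * x**2 + 0.05 * x**3

freqs = np.fft.rfftfreq(len(y), 1 / fs)
spectrum = np.abs(np.fft.rfft(y)) / len(y)
# The nonlinearity creates tones at sums and differences of the inputs,
# e.g. 300 Hz (1300-1000) and 700 Hz (2*1000-1300): frequencies that
# were never in the input. That's intermodulation distortion, and it's
# invisible to a single-sine sweep.
imd_300 = spectrum[np.argmin(np.abs(freqs - 300))]
```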
It seems that the article has been written by experts in acoustics, but not really in "audio".
TL;DR: Freq response measured with sine sweeps can't really tell you anything helpful to discriminate headphones with regard to sound quality.
Of course all this is confounded by the fact that music will tend to sound best on speakers/headphones with a response curve most like the speakers/headphones that the mastering engineer used (or more accurately, the set of speakers/headphones that the engineer compromised among). You will probably tend to have the best experience listening to music with the popular devices within a given musical subculture, because mastering engineers will be targeting those devices.
I find little fault with the arguments laid out supporting the paper's thesis.
For those commenters making the jump to "sound quality" (which is not the topic of this paper), the quoted observation above conclusively proves that these headphones have differing tonal qualities. Even a casual listener will be able to hear a difference of 5dB in the critical freq range of human speech.
 In terms of music quality. Other use cases may prefer designs that focus on other features.
I could quantify that, but why bother unless I'm getting paid a hell of a lot to do it? I don't see anyone here who's championing this naive approach offering to pay for a study designed by an experienced professional, so don't complain about a lack of scientific rigor if you're not prepared to pay for it. I prefer the more concrete feedback of people telling me it's the best soundtrack material they've ever received in post production.
You can talk about the scientific method all you like. I'm very fond of the scientific method. But rigorous testing costs money. If you're not willing to put your money where your mouth is, then accept the opinion of people who do this kind of thing for a living.
Most consumer audio equipment is a scam. I'd be interested in the subset of equipment from Shure, AKG, Sennheiser, Sony, Beyerdynamic where the design was actually intended to produce a broad frequency range correctly.
What's better: speakers that go to 40 kHz, but have a big dip at 4 kHz, versus ones that go flat to 15 kHz and roll off after that?
The application allowed you to benchmark headphones in real-time, revealing "how accurately" your music was being recreated; you'd pit two headphones against one another: clash of cans!
Ultimately, yeah, there's the uber-uber high-end, the really clear low-end, and a +-$900 muddle of everything else.
I like my Sennheiser HD600's (and MDR-1000x for the office) which are $300 headphones, but equally happy to use Superlux HD-681 EVO or Soundmagic E-10 which cost around $30
Isn't consistency an important characteristic of a headphone? Perhaps even more important than some ideal frequency response. You want the same sound every time you listen to a song, you don't want it to vary.
What industry convinces you to buy things you do not need? Advertising
The idea that a particular frequency response is the thing that separates good headphones from bad is ridiculous.
I know some low end headphones add weights to increase "luxury feel". It'd be interesting to see some research about when adding weights stops.
No correlation would mean that if I bought a random headphone that cost $2 (they exist, you can go to ali express right now and put in a maximum of $2 in a headphone search), and a random headphone that cost $500, then if you had to make a bet about which one would come closer to reproducing the bass of a song with a heavy bass, you would be betting even money. It would be a toss-up whether the $2 or the $500 came closer to producing that bass. Because there is no correlation.
Here is an example of correct usage of "no correlation": there is no correlation between a headphone's price and the md5 checksum of its SKU.
I skimmed the paper. A better title (for HN) would be "No correlation between frequency response and price quartile in 283 headphones".
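For the statistically inclined, "no correlation" here is just Pearson r near zero. A toy check with fabricated numbers (283 points to mirror the paper's sample size; the "quality" model below is invented purely to contrast the two cases):

```python
import numpy as np

rng = np.random.default_rng(1)
price = rng.uniform(2, 500, size=283)              # made-up prices
unrelated = rng.normal(size=283)                   # checksum-like noise
related = 0.8 * np.log(price) + rng.normal(scale=0.5, size=283)

def pearson_r(a, b):
    """Pearson correlation coefficient of two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

r_none = pearson_r(price, unrelated)  # near 0: betting is a toss-up
r_some = pearson_r(price, related)    # clearly positive
```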
I like old-fashioned ephemera...
Curious, who is the founder of this project? Interested to hear more about its background and the team behind getting this off the ground.
A couple of questions:
- How come there is no search function?
- Why are authors sorted by first name?
- Do the results of the proofreading get fed back to Project Gutenberg, et al.?
- Will readers like FBReader be able to add this catalogue?
So, sounds like a good idea and I hope it succeeds but it's not quite there yet.
You may also be interested in our toolset (GNU-compatible only at the moment, we're working on converting everything to Python but we're not there yet): https://github.com/standardebooks/tools
I'm happy to answer any questions anyone has. We're also more than happy to have new contributors, if you're interested in working on and proofreading a public domain ebook that you've been meaning to get to.
Some of you have mentioned concerns about the modernizations we do. The key word I think is "light modernization". Mostly that just means bringing spelling up to modern standards, and removing a lot of hyphens in words that are no longer hyphenated. A common one, for example, is to-morrow -> tomorrow. Another one we recently added was lacquey -> lackey. Generally we leave punctuation and grammar alone. I liken this to modern books replacing the "long s" character--it's just presentation that doesn't affect the meaning. Modern readers would rather see "successful" instead of "ſucceſsful" even though the latter is what was originally printed.
I struggled for a long time with my desire to see older books with modern spelling and typography, versus preserving the intent of the author and original publishers. Over time I've come to realize two things:
1. Many books back in the day were heavily edited by the printer and publisher without the author's input anyway, so you'll get various editions over time that look totally different. Jane Austen books are a good example of this--early editions often have a pathological overuse of commas, while later editions published after her death just remove a lot of them without comment. So when we're producing our own ebooks, we accept that there's a level of editorial discretion involved, and that "the author's intent" was a very fuzzy and often totally ignored topic hundreds of years ago anyway. How can we tell what the author's intent was in the first place, if various printers and publishers have meddled with the editions for hundreds of years already?
2. For those of you who want to read the originals in their totally unedited form, other projects like Project Gutenberg or Wikisource already have those faithful transcriptions for you, and places like Internet Archive, Hathi Trust, and Google Books have the page scans for you. By lightly modernizing our own productions, we in no way diminish your access to the painstakingly-preserved digital editions; we're just adding another option for you to read.
Liberated? That ephemera might actually be integral to the story and you are NOT the arbiters of intent. Please keep your modernizing out of my lit'ratur.
Pap, The Adventures of Huckleberry Finn
I see that the page is a bit slow. If you need any help to port it to a static format (for performance), please let me know.
Thanks for all your work!
Most of the consumers so far have been neuroscience researchers and statisticians, but we do hope (and think) that there's value for a wide variety of interests.
There's a bunch of different data, but the highlights are fMRI scans of people watching and/or listening to the movie Forrest Gump, eye tracking, and detailed annotations of the movie. We are also about to begin acquiring simultaneous EEG and fMRI.
Accessing the data is easy, and, as great admirers of Joey Hess, we also have it available in a git annex repo. :-)
[EDIT] Given that this thread is about open source datasets, it's probably worth mentioning that the license is PDDL.
Unfortunately, Open Source does not help here -- I do not see how OS can be used with data sets. The main OS leverage with software development is that if you use software X to build software Y, X is usually present in some way, shape or form in your deliverable Y. Not so with training data -- once algorithm development is done you can (and usually do) strip training data out and have a finished product that does not require X to run.
Even if one were to require open sourcing derived datasets it is usually easy to segregate the dataset with a tainted (open source) license as you build up your data so the new datasets are not formally "derived" and thus would not need open sourcing.
I would love a better way forward on this, or at least a cleaner explanation of options.
But members break the tools all the time and don't take responsibility for it. Even though there are cameras and people have to swipe their card at the door it still happens.
I think one reason sharing is not as common is because people are jerks.
I remember when I was a kid we used to borrow each other's NES games all the time and never give them back.
I applaud any effort to rebuild our fractured society.
For higher value items, I've been meaning to extend the above apartment-wide setup with a Google Doc inventory of things that people are willing to share, but where participants want face-to-face confirmation, like loaning a camera or a mountain bike. I wish there were a way (a social institution more so than a technical solution) to make quick contracts for borrowing things. I'm privileged enough to be able to replace minor things, but I am definitely reluctant to loan big things if I don't know whether a friend can/would replace the thing if something bad happened on their watch. And no, I don't want to rent them; I don't like the cognitive overhead of markets, and that's not the point.
In all seriousness, as others have noted, I see this as a rather damning comment on how badly human contact is getting abstracted away to businesses more and more. It's rather sad that people no longer feel able to just talk to others without some organisation to mediate.
Student loans were shown to be the primary deterrent. Of course, what wasn't blamed was the growing disparity between the median wage and a comfortable life. Another thing they fail to attribute this to is that millennials are smarter, avoiding spendthrift mistakes like large mortgages which tie them down to a place and leave them a paycheck away from homelessness.
source: Just google 'Millennials aren't buying <insert anything here>'
It's hard enough to get enough time to do something, imagine requiring that time to be in commercial hours, prefaced by a drive or walk somewhere, a talk with somebody, then postfaced by the same. And then you forgot something...
I once lived out of two bags for 11 months. After living in a one bedroom apartment by myself for about a year, I was surprised just how much I had to sell, give away and get rid of. I even tried to keep in my head that I wasn't /really/ buying anything, but basically renting it until I took off again. I always tried to buy used or from thrift stores whenever possible.
Also take the idea to places where people aren't used to having and owning these possessions --get them while it's still a nascent idea.
Still, I find it interesting that he managed to raise $30k on IndieGoGo. It signals that people care about those ideas/ideals.
The problem is that people aren't 'settling down' and instead are frequently moving around for work. This makes it hard to build up communities with the people around you. Your work becomes your 'stand-in' community - which has its disadvantages. This has ramifications for health and happiness far beyond borrowing stuff. The studies show that living longest isn't correlated with Western health care or even good diet - it's correlated with the strength of people's bonds with their community.
I'm currently doing this with a car for my visiting son. He's too young to rent a car, but it's not a big deal to buy an old one for 2 months and sell it when he leaves. Basically I paid the registration fee for 2 months + gas, which comes to $300 for a 2-month rental. Then comes the "fun" (which it is, to some people) of dealing with craigslist crazies while selling it. I actually enjoy dealing with the flakes, putting myself in their shoes and getting practice negotiating.
It's as eternal and essential a balance as CPU vs MEMORY.
I did grow up on the phrase "he who dies with the most tools wins", so it's taken me some time to transition to rent/loan. But I've got so many tools and supplies now, and I've reached a point in my life where I'd like to do more and own less, and all those tools are now somewhat a burden. I bet I'm not alone and that these tool libraries could probably get a lot of high quality donations.
There's also the Edmonton New Technology Society, which was the original Makerspace in Edmonton. Unfortunately for me, the location means that I'm unable to visit regularly.
You should buy what you need with the rights you're entitled to, and figure out the difference between what you need and what you really don't.
But feeling that way, at least initially, about a startup that becomes big often seems to share that characteristic.
People go camping in Hawaii, buy gear and give it away when they leave.
If I were to make such an app, how could I get a company to underwrite and provide insurance for these items?
"Starting in 2009, the Global Facility for Disaster Reduction and Recovery (GFDRR) and its partners developed GeoNode: web-based, open source software that enables organizations to easily create catalogs of geospatial data, and that allows users to access, share, and visualize that data. Today, GeoNode is a public good relied on by hundreds of organizations around the world ... GFDRR's direct and in-kind investment in GeoNode over the past six and a half years has been in the range of $1.0-$1.5 million USD. Partners have also made significant investments in GeoNode; a conservative estimate of these partner investments comes to approximately $2 million USD over the same time period. GFDRR's investment in GeoNode would be a reasonable amount even viewed strictly as a software development cost: the GeoNode software today represents an approximately 200% return on investment in terms of code written, since the current GeoNode project would most likely have cost $2.0-$3.0 million USD if GFDRR had produced it alone as proprietary software, without building an open source community around the codebase."
This is an unusual situation; many people need geospatial databases, and contributing their local data is useful to them. The value here is in the data, not the code. This is more like Open Street Map than a software package.
I'm all about open-source, but I wish people wouldn't focus on how companies should do it because it's good for them financially (although granted that's probably more effective with the intended audience than what I would say). I wish a bigger deal was made about how it's just a douche bag move to sell software and proactively prevent users from having freedom to understand, fix or modify it for their needs - that applies to more than just the source availability and license.
A lot of times I hear the implementation cost is where all the money is, so it doesn't matter what the software costs. That is sort of true, but large companies are not incentivized to make it any easier to implement, lest they put their System Integrators out of business and/or push them to other vendors. The Open Source community does not have this incentive, obviously.
Note that the study does not actually measure ROI from a revenue perspective, but estimates it based on theoretical saved costs: the company invested $1M in open source infrastructure and potentially saved $2M in direct development costs (given that the code base is currently worth $3M).
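The headline number can be sanity-checked as simple cost avoidance: 200% is just (value of code minus amount invested) divided by amount invested. A toy restatement, using the approximate midpoints of the ranges the report quotes:

```python
# Cost-avoidance reading of the report's ROI claim (illustrative only;
# the report quotes ranges, and these are roughly the midpoints).
invested = 1.0    # $M that GFDRR put into GeoNode
code_value = 3.0  # $M estimated cost to build the same software alone

roi = (code_value - invested) / invested
print(f"ROI = {roi:.0%}")  # ROI = 200%
```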
Most interesting takeaway for me is the implications of open source for government funded projects, and a ratification of the idea that contributions of code for some public tool can save the general public tax money. A forward thinking org could try to broker some sort of tax cut based on SLOC contributed to public, government-sponsored projects? Maybe that already exists.
Would suggest studying Red Hat's rise to $2B in yearly revenue to understand how a company takes open source and turns it into revenue.
 GFDRR's direct and in-kind investment in GeoNode over the past six and a half years has been in the range of $1.0-$1.5 million USD...GFDRR's investment in GeoNode would be a reasonable amount even viewed strictly as a software development cost: the GeoNode software today represents an approximately 200% return on investment in terms of code written, since the current GeoNode project would most likely have cost $2.0-$3.0 million USD if GFDRR had produced it alone as proprietary software, without building an open source community around the codebase.
Anyway, it looks like an interesting report and I look forward to reading it in more detail, but I think the headline in the blog-pointer is unwarranted.
A more meaningful measure is how quickly you can resolve a problem with open-source for X amount of investment, versus other options. With that, if a package doesn't do what you want then investing nothing appropriately yields NO return; whereas, investing certain amounts of time (asking questions, filing bugs, etc.) may yield more return, and fixing it yourself may yield the most.
The current title, "World Bank-Sponsored Report Shows 200% ROI on Open Source Participation," the contents of this link, and even the World Bank's own blog's title, strongly suggest that this was a World Bank-commissioned study across multiple open-source projects/communities. Note the plural in OP: "to quantify the benefit of contributing to and participating in open source communities." And the World Bank's blog title: "Leveraging Open Source as a Public Institution: New analysis reveals significant returns on investment in open source technologies."
But that's not the case at all. As noted in other comments, this is a single community, a single project. Granted, it's a successful one. But we shouldn't get our hopes up about "oh s*, this is an article I can forward to the C-suite to get us to invest in open source!" What we have here is technically accurate clickbait that relies on the brand of the World Bank's analysis. And, in being disappointingly vague, it tarnishes that brand.
Which references link:
Which references PDF:
OPEN DATA FOR RESILIENCE INITIATIVE & GEONODE A CASE STUDY ON INSTITUTIONAL INVESTMENTS IN OPEN SOURCE
Edited: It's interesting to see how a comment which states facts can get upvoted and downvoted this much. Sometimes voting on HN does not make any sense (to me). I understand that an upvote means "thanks for letting us know those facts". What do downvotes represent? That I should not write at all, that the price increase for all 3 telcos is fine, that everyone should be happy? Rhetorical question.
I can also note that this law has resulted in a lot more unlimited plans. I myself have just gotten one which includes 30gb of roaming. Is it cheaper than before? Hell no. Do I have to care about how much I surf, when or where? Not anymore - and freedom is worth the extra 20.
The frequent travellers (presumably wealthier) get subsidized by the infrequent travellers (presumably less wealthy).
So yes, receiving calls or using the internet will cost the same as at home; you still have to watch what number you are calling, whether it's from the country of your carrier or a different one.
please correct me if I am wrong
EDIT: so I was right, now it's even more insane than before:
For example: If you have a Belgian card and you travel to France and call either a hotel in France, back home to Belgium, or to any other country in the EU and the EEA, you are roaming (refer to the legal text on the regulation on roaming), and you will pay Belgian internal domestic prices (refer to the legal text).
However, if a Belgian SIM card holder calls from Belgium to Spain, she/he will pay the international tariff. Calls from home to another EU country are not roaming and are not regulated.
TLDR: there are no fees for international calls while you are roaming, but when you return back home, enjoy fees for international calls; using your SIM in a foreign network is cheaper than using it in your own network at home.
Surely some more central regulations will remedy the situation! "To each according to his needs." What could possibly go wrong?
So if you want to use your phone in a different country during the holidays, you'll need an EU roaming subscription.
The politicians once again failed to be sufficiently precise in formulating a law that would produce the intended result. They should have added a clause stating that all subscriptions are to cover the entire EU.
on Three I've had free roaming for years, at no additional cost, across the EU and a good chunk of the rest of the world
eg people who never "roam" are going to be subsidising those people that do.
A negative move spun as a positive... clever EU, clever.
Worse still, with ~3000 citations, Dwork's "Differential Privacy" (ICALP (2) 2006: 1-12) should rank even higher in the Theoretical Computer Science list. But Google Scholar has completely lost track of that foundational paper; it's got it all confused with a completely different paper, Dwork's 2008 "Differential Privacy: A Survey of Results". Note that this also means that anybody searching for the general topic "differential privacy" on Google Scholar will not get to see the most-cited paper about it! https://www.microsoft.com/en-us/research/wp-content/uploads/...
Disclaimer: Dwork and I have been seen together, for 24 years.
This almost sounds like collecting my most liked pics from 2006 on Facebook and creating an album "Best moments of my life".
Do they not have data before 2006 ?
For more papers, there is a nice list here: http://jeffhuang.com/best_paper_awards.html not limited to 2006
There is a bunch more places to get papers listed here too: https://github.com/papers-we-love/papers-we-love#other-good-...
I'm thinking about research versions of Lord Kelvin's favorite edict: "Heavier than air flying machines impossible" or the patent person (examiner? head of patent office?) who in the nineteenth century said everything that can be invented has been invented.
This... doesn't seem like a very representative selection of 'timeless' papers.
Things that had a major impact on the problems they focused on which many other papers doing something similar built on or constantly referenced. I'm skeptical of citations in general since those who chase them usually do a high number of quotable papers in whatever fad is popular instead of hard, deep, and critical work. Those I listed are the latter with who knows what citations. The collection is probably still nice for finding neat ideas or just learning in general.
no, a collection of titles. a collection of papers would be very useful; these are just links, e.g., to paywalled sites.
> Employing homomorphic encryption techniques, PIR enables datasets to remain resident in their native locations while giving the ability to query the datasets with sensitive terms.
I can imagine a few scenarios there. One, perhaps, is when a db admin should not find out what someone, possibly working on a classified project, is querying.
Or say one compartment / project collected the data and now they want to share it with another project. Those read into the second project don't want to reveal to the first one what they are querying because it would reveal classified information.
Another scenario is a database which has the results of possibly illegally intercepted communications. If the NSA can argue that the Constitutionally defined "search" doesn't occur until someone actually performs a search (as in, runs an SQL query over the data), then having PIR capability means being able to break the law while letting as few people as possible do it.
Also https://github.com/redhawksdr is pretty damn impressive. It looks like a complete parallel implementation of GNU Radio, complete with an IDE and such. Wonder how it compares?
Accumulo (a popular NoSQL distributed key-value store)
Apache NiFi (data processing system)
It looks like the last commit was over a year ago, though. Is there information I'm not seeing on whether these projects are actively maintained (or still in use at the NSA)?
I'd be very interested in more public cryptanalysis of this. It's a damn simple cipher to implement, and if it were at least as secure as say Salsa20/12 it'd be very nice for all kinds of applications.
Also interesting is splitting the repos: that the NSA and IAD have different repos, and that one seems focused on defensive tech while the other is publishing analysis tools.
I know there's a lot of people who aren't fans of the NSA (or what they do), but I think most of us can see a need for a military-grade organization to research defensive technologies for helping secure our infrastructure. I don't think many of us would be unhappy with the NSA if that's all they did. (Or phrased another way: most of us are unhappy because of how they conduct intel work or compromise defensive capability for offensive ones, eg, that whole business with ECC.)
So I think it's important to respond positively to things like the IAD github page, even if we're not fans in general.
It was literally a one letter change in the README file, but I still have the privilege to call myself the very first civilian to contribute to the NSA's open source project: https://github.com/NationalSecurityAgency/SIMP/pull/1
It took me about ten years to figure out that Twitter is a cesspool of useless noise and ego. Everybody tries to outdo each other with noise and follower count. What Reddit does right is focus on topics, primarily, not personalities. (Although I actually like the new user profiles, since they tend to be secondary focus).
Twitter could have been something different, and I think that expectation for something more got priced into what it is valued at today. Based on Twitter's current market cap (12.12 billion dollars!) it's already overvalued by a LOT; and there's really nowhere else for it to go but down. Any new users it gets are just bots or other political warfare tools.
For me, the final straw was that Twitter wouldn't "verify" Ecosteader as a legit account. So I deleted my Twitter accounts, sold a small investment I'd opened a couple years ago, and now spend more of my idle time reading Reddit rather than Twitter. And I feel so much better for it...
A subreddit is far more useful than a hashtag... it has staying power, searchability, and (like Twitter) is the kind of place where people will vent and where companies can interact with customers / users. The key for Reddit, I think, will be to do what is right for its users to achieve information awareness... Conde Nast is a news platform, after all. Let's just hope they don't let themselves go the way of Yelp.
At about the start of 2017, Reddit saw a noticeable increase in the activity growth rate, which investors love (although the biggest chunk occurred around October/November 2016, due to the U.S. Election)
And here's the BigQuery to reproduce the aggregation:
#standardSQL
SELECT
  DATE_TRUNC(DATE(TIMESTAMP_SECONDS(created_utc)), MONTH) as mon,
  COUNT(*) as num_submissions
FROM `fh-bigquery.reddit_posts.*`
WHERE (_TABLE_SUFFIX BETWEEN "2015_12" AND "2017_04"
       OR _TABLE_SUFFIX = "full_corpus_201512")
GROUP BY mon
ORDER BY mon
Sometimes they do that under the covers, so to speak, and some users don't like it (e.g. hailcorporate) but often they will promote their products transparently, and often normal users don't mind. The most obvious example is Netflix on /r/movies. Multiple employees were observed continuously posting things to the site, via submissions, comments etc and users liked it.
By introducing a charge for these corporate users they can rake in a substantial income. Whether it would make sense for them to label these corp users as such is another question, but they certainly know about them. I find it interesting that most normal users don't mind interacting with paid marketing employees and consider it organic and natural - very interesting. I also find it worrying.
Reddit is only a thing because people feel safe there. What other site with its user base does that?
I'm surprised Reddit stays so popular. I stopped using it ~8 years ago. The habit didn't stick with me (beyond a year or two).
Their struggles with advertising are something they can actively address now. Back when they were smaller than Digg, Digg made a huge mistake and alienated its users, which allowed reddit to flourish into the site it is today. Out of fear of what happened to Digg, their advertisements have been less aggressive and attempt to be targeted. But since reddit is so large now, the risk of alienation is much less. Also, other contenders like imgur and voat have tried to take the throne from reddit without success. And they have made great strides in making the site accessible to everyone - you can find anyone from any part of the political spectrum. If reddit wants to make money they have to broaden their appeal greatly, and I believe that's the purpose of the $150m. Hopefully when they do this they won't turn out like Digg, because they are a much larger network now.
Disclosure: Worked in this space 10+ years.
YouTube is the only one I see who is really trying to make it work. Instagram worked it out by placing ads with the most popular models.
And a key sin of all these sites is they think the content is theirs. It's not. Stop trying to regulate it to be what you want. It is what it is and you should be thankful, very thankful, they chose your site to host it. But this doesn't seem the case. The reddit admins in particular seems to hate a large part of their content-creating user base.
I'm (obviously) not the target market, but I absolutely detest disingenuous behaviour like this.
1. They have some weird policies about how they want their employees to work. At one point they were OK with people being remote; then they moved everyone to SFO or told them to pound sand.
2. Scaling issues: seems like that's an afterthought and only gets addressed "when it happens" (never thought through ahead of time).
3. The modmail... it's bad when you're dealing with lots of it. There are features completely lacking (like searching, or a CRM for users and how they contacted the mods).
4. The non-obvious spam... it's gotten worse now that they took down r/spam.
Keeping in mind this has remained a mystery to me ever since Facebook didn't sell for a million dollars an eon ago. Facebook's founder is today worth some ridiculous sum of money. Why is that? I'd have sold Facebook for a million dollars and then just made another website. What am I missing? YouTube sold for some amazing sum ages ago to Google, who to my understanding has still not turned a profit with it.
Since I am clearly so out of touch please make the explanation easy to understand.
It would be interesting to see some potential marketplace addons or payment processing.
Thus reddit could long term replace or absorb Etsy and maybe maybe Craigslist.
Steemit - "Your voice is worth something": https://steemit.com/
Of all the supposed reddit-killers that have been put forward, this is notable because of the funding mechanism, and because everyone there seems so damn excited. However, a lot of the most popular articles seem to be about... Steemit itself.
I know reddit gets a lot of flak for shit-posting content, especially during and after the recent US elections, but it has huge potential to become a consistent part of a user's media consumption & participation diet.
It's really addictive.
Most of all I remember "Creature of Havoc" ( https://en.wikipedia.org/wiki/Creature_of_Havoc ) as being amazing and extremely hard. Instead of being an adventurer you play a monster with limited IQ forced to unravel the mystery of your own existence. It employed various techniques that prevented cheating like "If you have the key, add the number written on the key to this page number to open the door". One of those puzzles still has people discussing it http://laurencetennant.com/bonds/creatureofhavoc.html ( contains spoilers ). At 13 years old it took me and a friend 2-3 months to finally crack it.
A play-through can be attempted very quickly, every time experiencing something new -- you are racing through the world attempting to return to London in 80 days.
The creator, Inkle, has a more traditional RPG, Sorcery. It's also good for recreating the feel of a classic D&D adventure, but I enjoyed encountering automatons in Vienna in "80 Days" more.
The map will be more linear-ish, or rather one main path with side loops -- imagine passing levels in a game: you are given ways to practice a new skill until you are able to pass to the next level.
More interestingly, the progress through the book can be itself constrained by a kind of crypto.
The chapters in the book will be numbered and ordered at random. At the end of each chapter it will say "goto chapter 234." or "goto chapter 34 mod 12"
Now imagine the player wants to cheat and starts with a random chapter in the middle of the book. He won't be able to find the previous chapter (it's kind of a one-way function). Moreover, if progress to chapter N+1 is gated by a puzzle that requires a skill learned in chapter N-1, he can't move forward either.
Some initial notes are here: https://github.com/sustrik/crypto-for-kids
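A minimal sketch of the gating idea described above (a toy illustration, not the notes in the repo): shuffle the printed chapter numbers and let each chapter reveal only where the next one lives, so jumping into the middle tells a cheater nothing about what came before.

```python
# Sketch of the "one-way" chapter ordering: chapters are shuffled, and
# each chapter only says where the NEXT one lives. A cheater who starts
# mid-book can follow links forward but has no way to discover which
# chapter pointed at the one they're reading.
import random

random.seed(42)

story = [f"Chapter text {i}" for i in range(10)]  # logical order 0..9

# Assign each logical chapter a random printed number.
printed = list(range(1, len(story) + 1))
random.shuffle(printed)

book = {}
for i, text in enumerate(story):
    nxt = printed[i + 1] if i + 1 < len(story) else None
    tail = f"  [goto chapter {nxt}]" if nxt else "  [THE END]"
    book[printed[i]] = text + tail

# A legitimate reader starts at the known first printed chapter and
# follows the goto links; the physical page order reveals nothing.
def read(book, start):
    n, out = start, []
    while n is not None:
        page = book[n]
        out.append(n)
        n = int(page.split("goto chapter ")[1].rstrip("]")) if "goto" in page else None
    return out

print(read(book, printed[0]))  # visits all 10 chapters in story order
```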
This is all so obvious but it never solidified concretely like this for me until now.
The following domains have had a bunch of stuff taken away from them: you're left with a narrow domain of very few concepts, and once narrowed, it is intuitive to make a visual tool:
webforms - google forms
relational forms - airtable
computer aided design
music
OneNote
mathematica
video games - unity
website - squarespace
crud app - hyperfiddle.net
world wide web - internet explorer, or html
What other ways can we attack a large domain like "enterprise business apps" and take things away until left with a few simple composable primitives?
Web blurb, slightly different from print
Meanwhile began as a series of seven increasingly complex flowcharts. Once the outline of the story was structured, a computer algorithm determined the most efficient way to transfer it to book form, using a system of tabs to interlink the panels and pages. The problem proved to be NP-complete; it was finally cracked in spring of 2000, with the aid of a V-opt heuristic algorithm which ran for twelve hours on an SGI machine.
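The "V-opt heuristic" the blurb mentions belongs to the same local-search family as classic 2-opt for tour problems: keep reversing a slice of the current ordering while doing so lowers the total cost. A toy 2-opt sketch on random points, as an analogy only, not the actual Meanwhile layout algorithm:

```python
# 2-opt local search on a toy tour: repeatedly reverse a segment of the
# current ordering whenever that shortens the total "distance" (here a
# stand-in for the cost of transferring the flowchart to book pages).
import random

def tour_len(order, dist):
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def two_opt(order, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                cand = order[:i] + order[i:j][::-1] + order[j:]
                if tour_len(cand, dist) < tour_len(order, dist):
                    order, improved = cand, True
    return order

random.seed(1)
n = 12
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]

start = list(range(n))
best = two_opt(start[:], dist)
print(tour_len(best, dist) <= tour_len(start, dist))  # True
```

Like the twelve-hour SGI run, there is no optimality guarantee here; local search just keeps improving until no single reversal helps.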
As a kid, I played 'City of Thieves' by Ian Livingstone. When entering that city, there is a crossing where one can pick between three roads, all leading to the same city market. I always wondered: which of the three routes is best?
To do so, I first ported 'City Of Thieves' to console and desktop and Nintendo DS (after mailing the book company for permission, which I got). Then I wrote an AI that assigns payoffs to the different chapters. Not only did this result in such a map, but also the payoff it assigns to each chapter: https://github.com/richelbilderbeek/CityOfThieves/blob/maste...
The question is still unsolved though, as I do not trust the implementation of the AI :-)
I didn't like the idea of lying about reading but I was OK with gaming the system by reading 'choose your own adventure' books.
I would pick the dumbest options because I knew it was likely I'd die fast and the book would be over.
I only read a few of these way back when, so I don't remember exactly if this happened, but another possible take would be a sort of 'where were you when the big [whatever] happened'. How do different choices early on determine how you're affected by the Plague/ day Dublin's streets ran with Guinness/ Chicxulub impact.
I ran into a lot of unexpected technical complexities in the compilation step, for example trying to remove unreachable branches of the story, and optimizing situations where branches merged back together, or removing variables that were being tracked that no longer had any effect on the story from that point on. It was a fun exercise. It makes me wonder how earlier CYOA books and series were actually written. How hard was it to keep track of the various branching plot lines? Were there ever cases where "bugs" were published? I was a massive fan of CYOA, especially Steve Jackson and Ian Livingstone's Fighting Fantasy series.
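One of the compilation passes mentioned above, removing unreachable branches, is just graph reachability. A minimal sketch (hypothetical section names, not the commenter's actual compiler):

```python
# Removing story branches that no choice can ever reach, via a simple
# breadth-first walk from the opening section.
from collections import deque

# Hypothetical story graph: section -> sections its choices lead to.
story = {
    "start":    ["cave", "forest"],
    "cave":     ["treasure", "death"],
    "forest":   ["death"],
    "treasure": [],
    "death":    [],
    "orphan":   ["death"],   # written, but no choice ever leads here
}

def prune_unreachable(graph, root):
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return {k: v for k, v in graph.items() if k in seen}

pruned = prune_unreachable(story, "start")
print(sorted(pruned))  # 'orphan' is gone
```

Merging rejoined branches and dropping dead variables are trickier, since they need dataflow over the same graph rather than plain reachability.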
Japanese Visual Novels are basically CYOA with pictures. The number of decision points varies depending on the book in question, but the basic structure is still there.
Twine is a system that allows you to create stories, essentially CYOA but with the option of adding variables. For instance, you could have an option that is only selectable if you found a key earlier. This bridges the gap between CYOA and classic text adventure. Since Twine outputs HTML it is also easy to port wherever you want.
Finally, there are a number of online community CYOA. This being the Internet, the quality is varied and many of them are pornographic. Probably the biggest is Addventure
http://twinery.org/
http://www.addventure.com/
In one short story he proposes a kind of reverse CYOA book: A book with many beginnings but only one ending.
Still, the maps match up with a lot of the examples I have seen of flow charts/maps/grids from authors who scope out their stories and then fill in the important and interesting parts with the actual text we get to read.
One more that I don't think was reachable from the article is on the blog These Heterogenous Tasks:
Or I was maze-solving. Probably both.
I wrote my own gamebooks using a simple notepad and "turn to page XXX"-style narratives. In the end, they are just programs that you follow. :)
To this day, I'm still fascinated by them and recently wrote some sites that let you create CYOA-style adventures yourself and with others; http://www.thiswayorthat.club is one of them.
I really enjoyed http://chooseyourstory.com/story/ground-zero and http://chooseyourstory.com/story/dead-man-walking-(zombie-su...
After we were done I added some code to output the graph structure of the game, rendered it with GraphViz, and gave it to the artist, who came up with this: https://twitter.com/rmodjeski/status/455184159401472000
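Emitting the graph for GraphViz is pleasantly simple: you just print one edge per choice in the DOT language and feed the result to `dot`. A minimal sketch (node names are placeholders, and the real version would presumably carry labels and styling):

```python
def to_dot(story):
    """Emit a story graph in GraphViz DOT format.

    `story` maps each node id to the list of node ids
    its choices lead to.
    """
    lines = ["digraph story {"]
    for node, targets in sorted(story.items()):
        for target in targets:
            lines.append(f'    "{node}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot({"start": ["cave"], "cave": ["end"], "end": []})
# Pipe `dot` into GraphViz, e.g.: dot -Tpng story.dot -o story.png
```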
I've had a blast reading this together with my 7 year old son.
(Disclosure: I am one of the programmers)
Having newspapers write an article about it is probably one of the more effective ways of dealing with this.
Konami treated Kojima like shit, Kojima left and took most of the talent with him. What Konami has left is a pretty cool engine and a hot IP, no creatives. MGS fans know it and don't respect Konami's next project even a little. Zombie-survival alternate universe, really?
In contrast I don't even know what Kojima's next thing is, it'll probably be pretty cool though. Kojima is good at over the top absurd and awesome.
Also, it's a fucking travesty that Fox Engine won't be used for anything important ever again, because for an open-world engine it runs like a dream. Too bad the online multiplayer is a non-stop cheatfest.
Am I missing something? Perhaps it's more common in Japan's video game industry to work in the same place for life?
The judge will only want to know one thing: did that person actually work for Konami or not?
Is there no paper trail to substantiate that? Contracts?
Did they have no signed contracts, and get paid in cash?
It's unfortunate that a slew of IPs will go down with Konami, but they haven't been looking great in a while, and this is just the culmination of the warning signs from the past. Silent Hill, Metal Gear, Suikoden, ZoE, even those weird late-90s PS1 games like Broken Helix will always be remembered by me.
I had a startup that tried to do this a year or so later, after I left the company on bad terms, mind you. The lead basically tried to bully me. I just up and left the same day that he tried that.
I read this and look around where I'm at, palm trees and beach. Oh right I'm in California.
I sent several emails trying to explain to the lawyer. He made a bunch of bs excuses.
So I immediately googled for a cease and desist letter and sent it to the buddy.
He basically asked me if I know who he is.
I don't care if you're Donald Trump. If Trump couldn't pass the Muslim ban, good luck trying to ~~force~~ coerce me to remove this experience from my LinkedIn.
After that I never heard from the lawyer or his company again.
A team I worked with in South Korea regularly slept at the office because the Korean lead was there slave-driving the crunch. South Korea in itself is a hard place to even launch a game, because they demand percentages and you must have internal teams/representatives; it's similar with China. On the China team, the employees were always in a state of fear of making the boss angry or doing the wrong thing. I noticed that it led to releases that shipped just to meet dates even if the work was incomplete, just so the boss would not get mad. Working with them, the devs told me they regularly ship when not complete just to satisfy dates, and it led to many issues, especially at hand-off points, because they knew it wasn't fully functioning.
I think overall companies in games think they can get away with this ownership/authoritarian type attitude anyways, but it might be easier in Asian cultures where there is a more authoritarian lean.
I know it's generally a bad idea to badmouth employers, past or present, on Facebook or other social media... but punishing employees for merely "liking" a post is more excessive than I've heard before. Maybe it's a cultural difference, but it seems very Big Brother of them nonetheless.
I'm sorry, "could not show this application to the chairman"?
Goes to show you once more how many people in positions of power have the emotional intelligence of a bully teenager but can't be touched.
and yet here we are, slaves to whims of a few thousand people globally who control 35 trillion dollars in assets.
This virtually guarantees that if you want to make a liveable income, you either need to work for or sell to a truly psychotic group of individuals who control an arbitrarily large portion of wealth thanks to a host of monetary and legal policies.
(Don't tell me the US is the same: the transparency with which they wash their dirty laundry in international public waters, and the fact that they have many layers of "social backups", are just two reasons to believe otherwise.)
Thank god the author hasn't lived through an event where everybody you know is affected by the event. The ability to say "I'm okay", say it once, and have everybody you know on FB see it is a huge stress reducer. It cuts back on the number of "are you okay?" messages you receive during the event when you may not have a lot of battery or a lot of spare brain to dedicate to answering lots of bullshit inquiries.
If he's feeling stressed out because of FB opening the "I'm okay" service in that small area for that catastrophic fire, he's being a self centered jerk. I guarantee that FBs service is helping some poor soul mixed up in that mess.
Edit: The Facebook safety check feature is not unique to Facebook. In many ways it mimics Japan's disaster message board feature. Every teleco in Japan offers this service:
I really think the author has lived too lucky a life in a place that has not suffered overmuch from disasters. He can't see beyond his own narrow vision.
Why does everything Facebook do have to be so heavily criticized?
Safety Check is a wonderful feature. If I remember correctly, it started off as an internal hackathon project that got turned into a full feature. They get shit if they turn it on (here), and if they don't turn it on (past tragedies where they failed to turn it on).
Why does everything Facebook do turn into a riot? After years in the industry, 95% of things that are "bad" that come out of big companies end up being well meaning and just look malicious without context. Hanlon's razor is real.
Yeah, Facebook has to fund all this somehow. Yeah, they are going to make their ad space extremely valuable with all the information they have. They don't sell your raw data. They sell access to you like the rest of the industry.
Those creepy ads that you saw based on some conversation you had? Turns out that they're NOT listening to your mic or whatever. It's either confirmation bias or something you're not thinking about.
Those friends that they suggest with a new account? Turns out your friends posted pictures of you on Facebook, and Facebook knows how to do facial recognition.
It feels like everything Facebook is overblown on HN. What am I missing?
Edit: I should have said this originally, but I'm a former Facebook employee, now at another big tech company. I try not to be too controversial in writing, which is why I made a throwaway.
That's the problem.
It should be a positive notification only, without any negative one. People can say they are safe (I see value in that). But Facebook should not say anything at all if someone has not declared themselves safe.
Stop using Facebook. Start telling your friends and family to do the same. As the "smart computer person" in many people's lives, you can be the voice they need to hear.
>Facebook CEO Mark Zuckerberg committed to turning on Safety Check in more human disasters going forward, responding to criticism that the company turned on its safety feature for Paris but not for Beirut and other bombings.
I'm not buying this statement. Where is the evidence of this? The article features two tweets from nondescript people stating they think the feature spreads unnecessary fear, but features no tweets from people who actually felt unnecessary fear. Are there any cases of people who felt afraid because their loved ones didn't check in even though they could have? Otherwise to me this argument is just speculation.
We would be much better off if we stopped accepting fake apologies and 'the algorithm did it not us' excuses.
Facebook employees programmed this thing under, I assume, the direction of management. This is Facebook's fault, not some magic, wibbly-wobbly force. It's one thing to have a bug, but this is working as specified.
It was the same recently when we had a storm in New Zealand, and they activated safety check for the entire country. I don't think it even ended up raining where I was at the time.
Back in 2001, I was in India when one of the worst earthquakes to hit that state in recent memory struck. I lived with my grandma and grandpa, and rest of my family - mom, dad, sister, uncles, aunts, cousins were at a wedding. For literally 7 hours we had no way of communicating with each other - they didn't know if we made it or not.
So yeah, Safety Check tool is just fine in my book, just mere act of being able to say "I'm OK" makes a massive difference.
We'd be better off checking in as safe after our morning commute.
I think Facebook's Safety Check is a good feature, but the implementation is pretty dreadful.
I don't really buy the argument that you would just assume someone was safe before; if there was a disaster in London, you would absolutely worry and want to know whether your friend was safe. Previously you couldn't easily contact them, though: if you even had their phone number, calling them internationally wasn't easy or sometimes even possible, but often you'd have their address and hope they would respond to you. Now it's much easier to keep in touch.
I also feel like people are making controversy over nothing when they think that asking if they're safe when they're in London during that fire is too much if they're not in the vicinity. Facebook is in a catch-22 here; Facebook either knows your (roughly) exact location and knows if you were in or near the apartment building at the time, which would make people cry about Facebook tracking you everywhere, or it doesn't and it asks if you're safe if you're in London. Even in the image from the tweet that this article references there is a "Not in the area" button you can press. There's really no way to correctly do this without having really accurate and very up to date information about the people using Facebook, which isn't always possible.
Could Facebook improve the ways it determines if a user is in the area? Yes, of course; a simple approach would be to look up the IP address block(s) serving the affected area and only prompt users connecting from them, although it's not really that simple. I also run into issues with Facebook thinking I'm in Japan when I'm not, even though I left nearly a month ago. Facebook could also improve the UI around it; the article points out at the bottom that the writer has 100 (probably) London-based friends, 97 of whom are not "marked as safe", which is terrible UI. But I absolutely disagree that this feature is worth removing based on the arguments presented in this article.
Sounds like a neurotic grandmother.
In any event, Facebook these days is only useful anymore as a convenient login mechanism for sites that use the Facebook login widget, and even there, Google's version works better.
Over 400 miles away.
This article is cynical clickbait nonsense, right down to assuming that it's impossible for Facebook to implement something for any reason but engagement. I'm no fan of Facebook but the idea that, to a man, they're faceless stock price maximizers is just stupid and frankly insulting.
I know plenty of people who immediately think of (and often call) family/friends in an affected area when a disaster happens. Depending on how far away it is, it can even be at the level of a city.
Hipsters' demand that we respect the shallow outward sophistication they cosplay cannot be considered a universal standard.
For most people there is nothing stress-inducing. Merely boring stuff.
(And I just noticed I should not have included the post as part of the sign; sorry for any inaccuracies I may have caused.)
Say to detect if something is or isn't a hot dog?
-GPS position, intent/goal, domain etc.
I'm at a dog show I would want breed etc.
I'm on the street I just want it come back dog maybe dangerous dog, friendly dog.
Also, it would be cool/scary to just get back movable object 1, person 1, living movable object 3, etc., and if I give it multiple scenes from a video it knows person 1 is the same person 1, and if I name them Tony it keeps tracking Tony.
This is actually almost entirely public data. Yes, including addresses and phone numbers and political affiliation. In some states it is not public as part of the voter file, but you can still get it other ways publicly, for example via the USPS, etc. Some states/players will make you sign agreements not to use it for commercial purposes.
The modeling info included is not public.
Acquiring 50-state data can be a bit of a pain, but there are at least two major players that will sell it to you. (I remember one of them literally laughed when I told them we would want the databases without any personal info included, because we just wanted the address to political precinct mapping.)
If the CEO goes to jail, things will change very rapidly (CEO will manage his CMO much tighter who will first want to see an security audit not older than 6 months).
At least the CEOs I have reported to as CTO were very sensitive to implementation issues in areas that could land them in jail.
Same for every other hack (e.g. Sony) or IT failure (e.g. British Airways' crashed data center).
Also, can someone ask Troy Hunt whether he has or can get access to this data so he can let us all know if we're on it? (But will it even matter if they don't have an email address field?)
"State", "Juriscode", "Jurisname", "CountyFIPS", "MCD", "CNTY", "Town", "Ward", "Precinct", "Ballotbox", "PrecinctName", "NamePrefix", "FirstName", "MiddleName", "LastName", "NameSuffix", "Sex", "BirthYear", "BirthMonth", "BirthDay", "OfficialParty", "StateCalcParty", "RNCCalcParty", "StateVoterID", "JurisdictionVoterID", "LastActiveDate", "RegistrationDate", "VoterStatus", "SelfReportedDemographic", "ModeledEthnicity", "ModeledReligion", "ModeledEthnicGroup", "RegistrationAddr1", "RegistrationAddr2", "RegHouseNum", "RegHouseSfx", "RegStPrefix", "RegStName", "RegStType", "RegstPost", "RegUnitType", "RegUnitNumber", "RegCity", "RegSta", "RegZip5", "RegZip4", "RegLatitude", "RegLongitude", "RegGeocodeLevel", "ChangeOfAddress", "COADate", "COAType", "MailingAddr1", "MailingAddr2", "MailHouseNum", "MailHouseSfx", "MailStPrefix", "MailStName", "MailStType", "MailStPost", "MailUnitType", "MailUnitNumber", "MailCity", "MailSta", "MailZip5", "MailZip4", "MailSortCodeRoute", "MailDeliveryPt", "MailDeliveryPtChkDigit", "MailLineOfTravel", "MailLineOfTravelOrder", "MailDPVStatus", "MADR_LastCleanse", "MADR_LastCOA", "AreaCode", "TelephoneNUm", "TelSourceCode", "TelMatchLevel", "TelReliability", "FTC_DoNotCall"
Neal Stephenson wrote a book called Interface which predicted a form of tech-enabled micro-targeted politics over 20 years ago. It was disturbing at the time; it's almost considered business-as-usual now.
I believe American democracy would benefit from including the study of such techniques in our educational curriculum. When I was in school, we studied advertising techniques to help us be skeptical. We need the same for targeted political messages now.
Want the name, age, gender, home address, mailing address, party of registration, and voter history from every registered voter in North Carolina? Here is the "leak" on Amazon S3. http://dl.ncsbe.gov/index.html?prefix=data/
Except, by leak, I mean, link I got from my state board of elections' homepage.
Why is U.S. voter registration made public at the individual name/address level?
Why do the states publish their voter registrations in the first place?
Why should private campaign operations (or anyone else) have access to this data?
Shouldn't voters' privacy be protected by the states?
The real danger of data like this, in my opinion, is illegal usage for voter fraud.
Find people who are likely to vote against you and likely to have poor voter registration documents, and remove them from the polls so they can't vote.
Find people who aren't likely to vote at all and vote on their behalf. In-person, the only verification required is name & address. By mail, the only requirement is a signature, which can be obtained from receipts (I assume this is available on black hat markets).
Leaving this S3 bucket as public-read allows for deniable coordination with illegal actors. I can't imagine they did this on purpose but that could be an explanation.
I don't know if it's possible, but I hope the FBI / Mueller team is able to get access logs.
Unless the company involved is sued to bankruptcy and the people involved are prosecuted, sending a strong message to companies dealing with user data, nothing will change. But that's unlikely to happen as this company is backed by the RNC.
While we're on the topic of collecting people's personal data, there's a simple solution: just don't collect it unless it's absolutely necessary. Stop asking me to broadcast my address in my newsletter. Stop asking me to submit my billing address when I make payments online. Stop asking me for my mobile number when I visit a fast food restaurant. Most of the companies that collect this data are not competent enough to keep it secure. The reason companies ask for an address to broadcast in users' newsletters is some anti-spam act which does not prevent spammers from doing their job. I imagine it's also a requirement for companies to collect a billing address for certain types of online payments. Change the law to remove this poorly thought out legislation.
More generally, we need regulations on how user data is used by companies. They should not be allowed to store user data indefinitely. If a user closes an account with a company, retain the data for a short period (eg- 1 year) and then delete the data automatically. Companies should not be allowed to build shadow profiles of users.
Do security firms have special permission to do this? Because as a private citizen, I am pretty sure I would go to jail if I tried this.
Hypothetically, could one deliberately leak a trove of modelling data with some fake voters inserted, and then monitor the mailbox associated with that fake voter and sue any organization you don't like that sends campaign flyers for using the data without permission?
It would be good to see him make this a clear case of responsibility. Also, someone on the RNC side needs to get fired, too. I'm not sure who, but errors this big demand it.
Genuinely curious: can you really have 198 million rows in a spreadsheet?
Recidivism rates in the US show it is objectively not working, with state prisons leaving inmates to re-offend 76% of the time.  In Norway, much derided for their lavish prisons, it's 20%. 
Throwing people away and treating them like animals is an abject failure, compounded by the mandatory fill rates in private prison contracts.  It's time to revisit the whole system, top down, and make it less about punishment and more about making sure it doesn't happen again. And it should be done with data.
Where I was, there was no outside fenced area for the hour mandated rec time. It was a 6'Lx3'Wx6'H fenced dog cage. At least there was a large open window to the outside to look at from the other side of the room.
You're also mandated an occasional hour at the "Law Library", which was really just a single computer with LexisNexis and Microsoft Office in an otherwise empty 4'x6' room. That VB class I had taken really came in handy.
I learned to make some reasonable dice out of toilet paper. Too. Much. Yahtzee.
In the case of the man in the article, his case was overturned. Hopefully, he won't have a criminal record. Getting a job today with a criminal record is incredibly difficult; that's the biggest reason why recidivism is high [x]. It's great that there's a push to "ban the box" (that is, to not ask about criminal history in job applications), but it hasn't made it to all the states. Furthermore, many companies have blanket policies against hiring felons[x], regardless of the context of the crimes and/or the rehabilitation of the individual. Background checks aren't a fair process in their review. Good-bye any real life.
[x] I'm sure somebody is going to argue that the bigger deal is a lack of quality mental health or addiction services provided to inmates and the dearth of such programs prior to conviction, and they'd be right too.
Rikers is a great example of a clearly flawed jail system, with inmates getting stuck for years without trial and sometimes killing themselves after losing hope.
Having recently gone through the judicial system as a white male, I can't imagine being a black man going through the same thing. I was able to buy my freedom, buy excuses, buy a lawyer to get me out of everything. When oppressed people who already start out behind fall into the same trap, there's little left for them to do.
Interesting read: https://www.nytimes.com/2017/04/05/nyregion/rikers-island-pr...
If you see an animal pacing back and forth in a cage, it is generally considered neurotic behaviour and a sign that the cage/enclosure is too small. My point being that it probably did make him slightly crazy and that solitary confinement is psychological torture.
People are products of their environments just like animals. A mistreated animal is also bad behaved and violent.
And if you think solitary confinement is a nightmare, how about putting 2 people in a solitary confinement cell?
"Imagine living in a cell that's smaller than a parking space with a homicidal roommate."
After reading The Jaunt by Stephen King, I was on edge for a day or two afterwards.
That we are powerless is bullshit.
And it's only one of about 4-5 good reasons right now.
Let's do it.
Productive communication and teamwork requires that we respond, rather than react.
What happens specifically with tools like Slack, which start off as a semi-official tool and then transcend to official, is that one gets into the habit of reacting instead of responding.
I have personally seen this "fastest finger first" played, almost always.
There has been tons of literature written on reacting vs. responding and I need not dwell on it.
Another aspect is that there is no exit once Slack is the primary communication tool. One is forced to use it; otherwise you are like an outcast, and people tend to take it as a signal that the person is on their way out and has hence moved away from Slack.
The rest of the drawbacks are documented in the article.
I imagine you could address this in Slack by just having everyone set themselves to 'Away' by default.
One of the things that I've found useful in Slack is turning off the "Someone is typing" message (the option is in the Display Options). I used to wait for someone to post if I knew a message was coming. Now I can drop in and out of a channel more easily. Similarly, I've muted almost all the channels I'm in so now I only need to check if someone notifies me.
The point I'm making is that Slack doesn't necessarily work the way you want right away, and that you might need to address a few cultural problems with the way your team uses it. You need to think about how your team communicates, consider why people do annoying things in it, and come up with solutions that work. At the most extreme that might mean writing a competitor product, but for most companies there just needs to be a little more tolerance of people not answering immediately and a little consideration that gifs can be unhelpful.
In my opinion the chat model is simply bad for effectively distributing useful information because it disappears in the noise.
Perhaps what's needed is a combination of the forum-style threaded conversation model for information sharing and work conversations and a chat room which is explicitly for the water-cooler stuff, so that people don't have to be always-on (and constantly interrupted). It should be safe to turn off when you need to focus.
I can't think of any five minute period during my entire tenure where the Slack tab didn't have a little red circle telling me that I absolutely needed to check it right this second. There was no way to filter notifications beyond "Everything", so that little bubble would go up every time anybody in the company pressed a key. And heaven forbid somebody typed "Good morning, @channel" (which happened 20 times per morning per timezone), because then you'd get the dreaded Red Exclamation Mark in the tab.
I can't fathom how anybody would have been able to work if they had gone as far as allowing notifications to be turned on and make noises at them every time Slack thought something Important Enough To Interrupt You had happened.
"Thread-first communication" - We have it. Email threads. I've been using Zoho Mail. It's one step ahead and has modern concepts like streams/commenting and sharing built around emails. Very useful.
"Truly transparent conversations" - What this means is, the knowledge has to be highly searchable. Also, not all threads have to be public. With emails, you can either broadcast an email company-wide or you can opt to pull relevant users into a conversation. Makes a lot of sense not to broadcast everything company-wide, doesn't it? More importantly, emails are easily searchable. Once received, I'm confident that an email is going to sit there in my inbox forever, without worrying that it's going to be deleted later by someone who has authority.
"Leaving out the online presence indicator" - Exactly. Emails.
Maybe there's some more important key aspects that the product's covering up for me. As for me, I guess emails are simply good enough!
The choice of asynchronous vs. synchronous is a false dichotomy. Development teams I have been on have used a combination of both since forever. Before Slack/HipChat there was XMPP/Jabber; before that was IRC. It must be business people binging on the glory of synchronous communication and then having a hangover. In my experience this has never been an issue.
Sametime was a big part of IBM culture. IBM's work-from-home policy was tied up with it: being at work means you are online and responsive on Sametime. A typical meeting involves people sitting in a conference room Sametiming comments to each other about the speaker.
One nice thing about Sametime is that your chat history is in XML files on your computer. This allows you to use grep to find past information.
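A sketch of what that looks like in practice; the log directory path here is a guess for illustration (Sametime's actual transcript location varies by version and OS, so adjust accordingly):

```shell
# Where the local Sametime transcripts live (assumed path)
LOGDIR="$HOME/Sametime/chatlogs"

# List every transcript that mentions a keyword, case-insensitively
grep -ril "deployment plan" "$LOGDIR" 2>/dev/null

# Show the matching lines with the XML tags stripped for readability
grep -rhi "deployment plan" "$LOGDIR" 2>/dev/null | sed 's/<[^>]*>//g'
```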
IBM started to use Slack when I left. On the one hand this allows you to collaborate with people outside IBM, but the downside is that you need both tools running. Slack for the cool kids, Sametime for the old guys.
My new job is giving me a new experience- video conferences using Google Hangouts. Many times with googlers on the Google bus.
I have all this persistent information! New people need to get up to speed fast - where do I put it? Cue the Wiki fad.
I have to get things done and elicit quick feedback ASAP! Cue the Chat fad.
I want async communication so I'm not bogged down all the time! Email and Forums.
There's a time and place for all of these.
Here's my feedback (from someone who currently leads the leads of 7 remote engineering teams):
Not knowing if someone is available and if they will get back to you soon is very very bad for many conversations. For example, when you have a C-level asking for rapid turn around on something that needs to be done by someone multiple levels away from you.
In remote teams, sometimes people disappear (e.g. car accident or civil unrest in a random country), and if you have no way to know, you'll eventually get fed up.
The feeling of cohesiveness that comes from having realtime conversations, brainstorms and other collaboration is very important to managers (especially in remote teams) and convincing them that realtime talk is not a good default is going to be quite hard.
As a result you will never be their default communication channel - at most you'll be the place they discuss bigger problems. But because you're not the default, you'll get destroyed by Slack, which is the default. Most times people use the tool at hand that they are most familiar with, not the best tool for the job. I would encourage you to write a Slack bot that reminds people to use you & helps them easily experience value from your product without you having to win the default spot.
Finally, most people will have serious difficulty seeing the difference between your product and other product management tools that allow commenting in detail on issues being discussed.
The problem with Todoist and Wunderlist and the like is that it's work based, not communication based.
Slack is awesome at communicating. But it has no checklist, no todo, no collaboration tools that fulfill most needs. They're all half baked.
Wunderlist has checklists and todos and can be used to track collaboration efforts by linking Google Docs or Sheets to a task for example, but communication is still horrible. It's nowhere close to being a main communications channel.
So most end up using a combination of apps.
But this is also one of the greatest detriments to productivity. Having to juggle apps.
Now we're starting to hire more developers and I'm realizing how disruptive it is for them to be pulled into Slack conversations all the time. So I've started using email more for communicating with the devs, and that seems to work reasonably well.
The questions I have are: (a) is it possible to have a single communication tool that handles both real-time and async properly and (b) does that even matter? Maybe email+Slack is a perfectly fine solution. Either way, it's obvious that only using Slack isn't a viable option.
It is also interesting that we as a community have completely sold out federated independent operators with a common standards defined protocol for soloed tooling.
Nor should you, really. Imagine trying to do that in an office-based organization! Slack, to me, is a bit like our digital office space.
You can also use highlight words to ensure you don't miss any discussion on topics you're particularly interested in. We have a guy who worked a lot with our billing, and he is magically always present whenever anyone says "refund".
We've always had chat software, but mostly we talked to each other during breaks or after school using our desktop computers.
Then cell phones became normal, and SMS was normal, later on, it was chat apps, social media. Things sped up by a factor of a billion.
All of a sudden we have entire conversations happening as the teacher is lecturing; people are making weekend plans, gossiping, bullying; relationships are forming and falling apart in minutes' time.
Honestly, I can't even imagine what hell school must be now that everyone has social media and ephemeral messaging everywhere.
And I believe Slack did a little bit of this to the workplace, we've always had chat apps, but it was always a little bit to the side, but Slack integrated it with our work tools, now we have all these notifications from services bundled in the same software that offers you public channels, private channels, private groups and direct messaging, there is a whole new level of communication going on both on and off the record.
In some workplaces, I felt like it was school all over again.
If you want to discuss some large changes, sure, IM is a bad place. Conversation will probably get interrupted, derailed, etc. That's where, in my opinion, things like Google Docs or plain simple GH Issues shine. The format forces into better thought out, longer messages.
For other things, you need to organize the chat rooms. If you have 10+ people and 2 chat rooms (I think slack defaults to #general and #random) then yes, sounds bad. But instead having quite a few chat rooms with <10 people each might work out well. Even having multiple rooms with the same people can be good to separate different topics of conversation.
"Always on", I believe, comes not from the IM itself but from the culture of the company. I know places where everything is e-mail only and people constantly refresh it. Equally, IM is mostly ghost town outside working hours at other places. And Slack has decent notification settings to make sure that you don't go crazy.
So, IM has its place and provides value if you are willing to make it work. It's just not an "it just works" solution, which is hardly surprising.
The goal is to help your team be available and responsive without compromising productivity. To achieve this, incoming requests to a particular team are collected in a "feed" channel. The notifications ping specific people on rotation or in a sequence (according to rules) until claimed, and conversations are explicitly resolved with a "closed" action. We provide assistance for things like inserting knowledge-base and template responses, and tagging the conversation. Integrations via Zapier mean a conversation can easily turn into an Asana task, Github issue, etc.
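The ping-on-rotation-until-claimed flow could be sketched roughly like this; all names and structures here are invented for illustration, not the product's actual API:

```python
# Toy sketch of a "feed" channel: ping people on rotation until a
# conversation is claimed, then resolve it with an explicit close.
from collections import deque

class Feed:
    def __init__(self, team):
        self.rotation = deque(team)  # who gets pinged next, in order
        self.claimed = {}            # conversation id -> assignee

    def notify(self, convo_id):
        """Return the next person to ping and advance the rotation."""
        person = self.rotation[0]
        self.rotation.rotate(-1)     # next notification goes to the next person
        return person                # the real product would send a ping here

    def claim(self, convo_id, person):
        self.claimed[convo_id] = person

    def close(self, convo_id):
        """Explicitly resolve a conversation; returns who handled it."""
        return self.claimed.pop(convo_id)

feed = Feed(["ann", "bo", "cy"])
assert feed.notify("t1") == "ann"   # first ping goes to ann
assert feed.notify("t2") == "bo"    # rotation has moved on
feed.claim("t1", "ann")
assert feed.close("t1") == "ann"
```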
If you're struggling to manage the chaos, give it a shot and see whether it helps.
The only actually useful feature of this app seems to be that it's thread oriented, but you can mitigate that with enough irc (okay, Slack these days) channels as well.
It's easy to see how you could design a "process" around using something like Slack to achieve the same experience and if you don't design a "process" for using Twist then it will suck just as much. We tried to heavily use channels in HipChat as a threading and noise-avoidance mechanism, it worked well but we often ended up with a lot of channels with the same people in; at which point discipline becomes stressed.
Everything else should be handled through email, to allow people to consider and tailor their responses, to allow others to be added to the conversation through a CC: of a single focused thread, and to provide a directly searchable historical record of conversations.
The problem lies between unstructured and structured information. The problem is one of process vs flying by the seat of your pants. Ideally, you want processes for every repeatable action, and more flexible means of communication for new events that need to be handled quickly by those empowered to solve them.
However, every repeatable process starts as a one-off event: customer requests for the same actions pile up until the moment you realize your app needs a new feature; faults occur and get solved until you realize you need to engineer a solution that prevents or automatically fixes them; internal processes get lost and hang until you decide to redefine the internal workflow.
What is missing is a simple way of moving unstructured communication (sync and async), onto structured communication. Chats should end up as feature requests, bug reports, documentation or some other permanent medium that fits into the process.
If the unstructured->structured bridge is solved, you don't have to monitor chats (or mailing lists, or forums). If they result in something meaningful, they will appear in the structured, process-oriented flow. Monitor that, and get involved in chats where you are directly mentioned. (and skip all gif chats, really)
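As a toy illustration, such a bridge could start as nothing more than keyword routing from chat into the structured flow; the patterns and destination names below are invented:

```python
# Minimal sketch: route unstructured chat lines into structured items.
import re

ROUTES = [
    (re.compile(r"\b(bug|crash)", re.I), "bug_report"),
    (re.compile(r"\b(feature|can we)\b", re.I), "feature_request"),
]

def route(message):
    """Return a structured destination for a chat message, or None."""
    for pattern, destination in ROUTES:
        if pattern.search(message):
            return destination
    return None  # gif chats and the like never enter the structured flow

assert route("found a bug in the exporter") == "bug_report"
assert route("can we get dark mode?") == "feature_request"
assert route("lunch anyone?") is None
```

A real bridge would of course be human-driven (a "promote to issue" button) rather than regex-driven, but the shape is the same: chat in, structured artifact out.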
I also considered creating some biz comm software myself, but only jotted down some ideas [0].
0 - https://bitbucket.org/snippets/cretz/Xopoj/retzle-conceptual
It didn't seem to work well. Since all members of all groups were from all over the world and rarely shared a timezone, no conversation could really take place in real time.
And since there are no threads or topics or anything, it's almost impossible to go back to something that was said a while ago.
I think a subreddit would have worked much better.
I'll be answering your questions. Thanks for the support!
These problems can be mitigated in a work environment where you can enforce cultural mores, but in a social environment (like within a group of friends or some other non-work related community) this becomes more challenging.
There is probably room in the market for a messaging app that prioritizes reaction communication and suggests that more thoughtful communication take place in a more permanent project management tool.
What's your sense of how Twist and Basecamp compare?
If you work on larger features that require quiet time, then a real-time chatting app can be quite annoying. My team tends to release lots of small features very often, so fast communication is important.
> Group chat is like being in an all-day meeting with random participants and no agenda.
1) Turn off notifications (except @mentions or DMs).
2) Code uninterrupted.
3) Check in when I'm taking a break.
4) Profit.
Our company culture is such that Slack is mostly for notifications from our toolset, so that probably helps. Also, we're small (< 20).
This sounds really creepy
a) Built in reminders (I don't want to sign up to todoist also)
b) Post formatting
c) Ability to mute threads
In particular, there was CJSX - JSX with CoffeeScript. Writing React components with it was an absolute joy in comparison, and it helped eliminate a lot of noise, making it much more obvious what a component was doing.
But yeah, it's been hard to convince others that it's worth the investment, when honestly it's really just a personal preference at this stage.
The only issue I foresee is tools being built around the use of ES6 that compel you to abandon it.
I've learned that less is better. I don't use jQuery, I don't use Lodash. I don't use React, I don't use ImmutableJS, I don't use Webpack or CommonJS. All of these tools are more a burden than a blessing, and you just end up stacking on 100 dependencies asking "Is this really better?"
I learned this lesson with Cucumber/RSpec/Capybara, etc. I started asking why I had to use these over plain old TDD, so I used TDD for a month and found out everything was totally fine.
I don't even really use Arel in ActiveRecord. I just write raw SQL that serves JSON directly back. I made it easy to organize SQL in partials, just like views, and to inject variables and conditions into my SQL.
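I don't know what the commenter's actual helper looks like; a minimal sketch of the idea (named placeholders injected into a SQL partial) might be:

```ruby
# Hypothetical sketch of a SQL "partial" with injected variables.
# A real version should use bind parameters (e.g. sanitize_sql),
# not plain string substitution, to avoid SQL injection.
def render_sql(template, binds = {})
  binds.reduce(template) do |sql, (name, value)|
    sql.gsub(":#{name}", value.to_s)
  end
end

sql = render_sql("SELECT * FROM orders WHERE total > :min LIMIT :n",
                 min: 100, n: 10)
# => "SELECT * FROM orders WHERE total > 100 LIMIT 10"
```

In a real app the template would be read from a file under something like `app/sql/`, mirroring how view partials are organized.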
I went to great lengths to evaluate all these tools because as a work-form home contractor I can afford the time, and the "what if" really bothers me.
I'm still working in Rails with Sprockets and CoffeeScript, and I write all my SQL by hand.
Less is Better.
All these kids want to do a bunch of busy work for no good reason. It makes them feel productive.
I have tried dropping CS, but the difference in productivity was too great to give it up. It's not that I'm married to CS; it's that it does what it says: it makes your JS concise so you can be more productive.
I felt AngularJS dying, so I spent 3 months researching React and building client apps in React. I just didn't get all the extra work, and settled on Mithril.
The hardest thing was giving up writing HTML-like templates like I did in AngularJS, but then I remembered that this was my first aversion to AngularJS too, where I swallowed the medicine. I had to swallow more medicine to unlearn it. Mithril paired with CoffeeScript makes writing markup in CS a joy. If I had to do that in regular JS, I could see why people would be compelled to use ugly JSX.
Other weird CoffeeScript quirk: "x isnt y" is not semantically equivalent to "x is not y".
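The gotcha, as I understand it: `isnt` compiles to `!==`, while `is not y` parses as `is (not y)` and so compiles to a comparison against the negation of y. In plain JS terms:

```javascript
// What the two CoffeeScript forms compile down to (sketch):
//   x isnt y    ->  x !== y    (inequality, usually what you want)
//   x is not y  ->  x === !y   (compares x to the *negation* of y!)
const x = 1, y = 2;

console.log(x !== y);  // true  -- "x isnt y"
console.log(x === !y); // false -- "x is not y": 1 === false
```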
Now JSX is not weird anymore, it's just horrible.
Somewhat surprised nobody has mentioned the upcoming release of CoffeeScript 2.0, which outputs ES6 natively: http://coffeescript.org/v2/
It's in beta2 now and ready for trial.
Anyway, I didn't have enough time to finish the project, but somebody took the idea and did most of the work.
Would love to explain more, but I have to board the plane now :) These links might help:
The whole JS stack switches every 2 years or so?
CoffeeScript was an important crutch and stepping stone, but it's maybe 2 years past its prime, and this will make updating and improving legacy projects a lot easier.
Personally was never a fan of it - it used too many Ruby idioms for my taste and produced noisy code that was a pain to debug (at the time) - but it did spur the development & adoption of other, better, systems (ESNext transpilation, TypeScript, etc)
AMD was able to launch a pretty competitive CPU despite massive delays because Intel has barely improved the IPC of their processors over the last 5 years.
Meanwhile, Apple is betting on iPads being the future computer of the everyman, and they make their own chips. Microsoft recently acknowledged that Windows basically has to run on ARM to future-proof their platform. I guarantee you'll start seeing more ARM-based Windows computers soon.
Intel recently told everyone they're willing to sue for patent money, the last desperate act.
Intel better have a leapfrog cpu in the pipeline or it's over.
By contrast, the Raspberry Pis and even the Ci20 are significantly more stable and easier to work with, and their specs are far more truthful.
- It was too expensive compared to other BLE- and WiFi-capable SoCs or combinations of chips.
- x86 compatibility doesn't matter.
- Power draw (~1W) is too high for the places where one would want to use this SoC.
- The Yocto-based SDK was a mess. Every feature had a caveat and it was a pain to build.
- There was never a clear commitment from Intel that they would make these in bulk for manufacturing.
- low power draw (~300mW), even lower in sleep (50mA - nA depending on what kind of sleep),
- the SDK is FreeRTOS-based,
- the "MCU features" like GPIO, PWM, etc., actually work all the time.
Why now? They just announced that they're cutting spending down to 30% of revenue by 2020: https://www.fool.com/investing/2017/05/12/intel-corporation-...
And Joule: http://qdms.intel.com/dm/i.aspx/C3391A8F-693F-418B-B9B5-03A7...
They got cozy with the monopoly, seems the bills arrived.
We can only hope that someone at Intel has realized IoT is a total tarpit, and is getting out of the product segment entirely.
Reading the Hackaday comments, it's probably due to the documentation and Intel's own doing, not the technology itself. I am guessing that open source OR community > closed source or company (as in Raspberry Pi with a great community vs Galileo, or Arduino vs anything else) for these kinds of things.
It's just that the x86 was always so huge that all the other projects never got traction.
That would totally suck as we are pretty heavily invested in Edisons.
All of these chipsets had (and still have) huge promise, but have been mired in really puzzling and terrible board design issues.
You can tell that there are two different groups at Intel: the "Core" group and the "IoT" group.
The Edison was super powerful, price-competitive, and an honestly wonderful platform to dev on. Yocto, while a weird decision, was a pretty vanilla Linux flavor and easy to pick up.
With all that promise, though, they botched the silicon. The 2nd CPU on the Edison, the 100MHz Quark, never actually worked. It was shut off in firmware from day 1 because of presumed hardware issues.
Even worse (and the reason we stopped using the Edison), the SPI bus had so much electrical crosstalk, from not being properly routed or shielded, that you couldn't use it at anything over 25 Hz with a SINGLE bus endpoint. This removed 90% of the real-world uses for the Edison: driving displays, sensor and motor arrays, et al. Intel knew it was a problem and consciously decided not to rev the board to fix it.
Galileo and Joule are both underpowered and incredibly overpriced devices. Today the Raspberry Pi 3 is the hobby standard, and in nearly every real-world use case it is orders of magnitude more performant at 10% or less of the cost.
Intel IS in trouble, because this is their third botched attempt to enter the world of embedded and mobile computing.
First was the Atom, which isn't bad, but is too power constrained to compete with ARM. They made some good efforts here, but the cost is higher and perf/watt significantly lower than ARM.
Second was their foray into mobile, trying to branch out from the Atom. Anyone here ever use an Intel-powered phone? They spent billions on it, never to have a mass-market device actually appear. Same problems: while they had equivalent performance to ARM, prices were 30-50% higher and performance per watt was significantly worse.
Now here we are with attempt 3, with the same issues. Intel fundamentally doesn't know how to design, manufacture, or sell embedded chips.
It's a completely different market motion, different customers, different constraints, shorter cycles and much much different competitive landscape.
AMD isn't going to "beat" Intel. They have fundamentally the same problems. Both AMD and Intel aren't going to go bankrupt, but they are going to continue the slide into much smaller scale manufacture.
They are both being eaten by the dozens of ARM vendors, by the FPGA movement, and by public cloud data centers. It's a reduction by a thousand cuts, making it that much more difficult to do anything about it.
I stared at it for what felt like minutes and then said, if I looked in your search history would I see you looking this up on stackoverflow?
The guy said "yes" and I said I would make it work by asking you to send me the link to the stackoverflow answer.
He laughed and said "you got me".
Same company different interviewer asked me to explain the "pros and cons of Java vs Rails."
I turned the job down.
"Where do you see yourself in 5 years?"
"I don't know. Are you reading these questions from a textbook? FYI they're not very effective if you want to find someone who will do the job."
"What's your greatest weakness?"
"Trick questions in job interviews."
This is obviously not good advice; I have just reached a point in my life where I will not be made to dance to the whims of the interviewer, despite which the job would likely go to one of the employee's friends rather than it being given to me anyway.
One of the few merits of this approach is it tests if you can be frank with them or not. If they get offended at your lack of sucking up, then you probably wouldn't want that job anyway, because I find that if you suck up during the interview, you will find yourself struggling to maintain that ideal forever if you do get the job. It's better to be upfront about what things you actually care about.
Indeed, you (almost) never need to reverse linked lists in practice - but you often need to chase references of one kind or another (database, pointers, etc.) and do some manipulations on them that would result in a different list. If you have 20 data points, it doesn't matter what you do, but if you have 100M or 10B points, it makes a great deal of difference whether you do O(n), O(n log n) or O(n^2) or O(n!).
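For reference, the exercise itself is tiny; it's the pointer-chasing habit it tests that transfers. A sketch in Python:

```python
# Reverse a singly linked list in O(n) time and O(1) extra space.
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse(head):
    prev = None
    while head:
        # re-point the current node backwards, then advance
        head.next, prev, head = prev, head, head.next
    return prev

lst = Node(1, Node(2, Node(3)))
r = reverse(lst)
assert [r.val, r.next.val, r.next.next.val] == [3, 2, 1]
```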
I think this is a good question, because it is an abstract version of the kind of problems that any non trivial code has to address. It is easy to describe, and easy to solve if you know what you are doing. I don't use it anymore because it's too common to be useful.
And yet, through the years I've gotten the feedback that it is "far detached from real work", "tests how good I did in school rather than how good I am" and various other comments -- and almost always from candidates who did poorly; I've never gotten this feedback from anyone I would consider competent.
 I try to get feedback through whoever referred the candidate, whether it's the friend-of-acquaintance, or the recruiter. I don't, in general, trust direct feedback from someone I turned down (or hired, for that matter).
This part got a laugh out of me.
Otherwise, the analogy feels really stretched and at times feels straight up incorrect. For instance, I've never sat through a technical interview administered by non-technical staff. I've likewise never been quizzed on the history of computation.
I agree that the programming interview in the US can be overly algo-and-whiteboard happy, but I think this critique is unfair and possibly even outdated (my most recent round of interviews involved more live coding, and less whiteboarding, than when I last interviewed 4 years ago)
Interviewing bad people at good companies goes more like this:
Q: Can you explain the difference between a noun and an adverb?
A: I've worked at the UN for 20 years! I'm an accredited translator! I translated for Putin and Obama!
> Weird analogy. Companies don't ask candidates the history of binary search trees, computer architecture, or anything like that.
A better analogy would be if they gave this translator a particularly challenging piece of text to translate -- for example, one that didn't have a clear right answer and the candidate had to discuss different tradeoffs.
But... then that doesn't seem like quite so silly of an interview process.
There are absolutely valid criticisms of whiteboard interviews, but most criticisms made are either based on terrible implementations of whiteboard interviews or based on stuff that's just incorrect. (Yes, it's totally fair to criticize a company who conducted a flawed whiteboard interview. But that criticism doesn't apply to the system as a whole. That same company could mess up whatever your favorite interview style is, too.)
> By the way: I don't actually know how translators are interviewed. But one of my best friends interviewed to be a journalist with some major New York newspapers (WSJ, etc).
She was already a journalist before this, so they had lots of public writing samples for her (analogy: GitHub code samples).
Did they just hire her based on this? Nope!
She had to do a live writing test (analogy: whiteboard coding interviews). She also had to do a pitch session to talk about different potential stories she could theoretically write about (analogy: design/architecture interviews). Plus some behavioral interviews.
Why not just look at her writing samples? Unlike for coders (which might not have public portfolios representing a significant portion of their work), basically all of her work product was actually public. So why not just hire from that?
Well, because all they see is the final output. They don't know what direction she was given, how long it took her, how much editing/collaboration was involved, etc. A crappy writer in a particular environment can churn out good work -- because someone else is doing a lot of the work. Looking at the final result is actually not a great measure of someone's skills.
Coding interviews aren't that special.
In reality, the sad state is, in my opinion, a confluence of the following forces:
1) HR people more often than not have barely any clue about the topic. They must, unfortunately, play this charade because if they knew the topic they wouldn't be working in HR.
2) The technical people who prepare the questions typically believe they are too busy to spend time thinking about the problem, and instead decide to settle on any test. The assumption is that a good candidate will be able to navigate any kind of test better than a bad candidate. In the view of the technical person, this is just a screening, the real purpose being to eliminate as many phonies as possible.
3) The candidates, more often than not, have a highly exaggerated view of their abilities. Unfortunately, high demand means that the market reaches for lower and lower quality of "resource", leading to comical situations where a large portion of the workforce (especially in countries like India) is developing software by shuffling keywords around until the code compiles, which entitles them to call themselves Senior Engineers. Real senior people have no problem finding a job, to the frustration of others who find the situation "unfair" and the entire process "rigged".
One more note on the process: while the cost of a failed interview is quite low for the candidate, the cost of making a hiring mistake is very substantial for the company.
It is important when doing lots of interviews to have a question that you know well and can be used to benchmark across your interviews. Something relevant to the job is an added benefit.
And also, http://www.jasonbock.net/jb/News/Item/7c334037d1a9437d9fa650...
"Could you translate '¿Dónde está el baño?'"
"I'm sorry, I don't do well on tests. I thought we were going to talk about my past projects on my résumé. In fact, I'm quite offended by your question. Good day, sir!"
Google: 90% of our engineers use the software you wrote (Homebrew), but...
Haha, what feedback?
lol... I actually won a translation competition when I was in (middle?) school. I was attending a Catholic school where we were taught Latin, and we went to some translation competitions: you had to translate a chunk of text as fast and as accurately as you could. It was fun :)
...we do Agile. Everything is in a flat structure, except when it comes to salary and responsibilities.
I really should have left at that point: >1 hour lost there...
And regarding "full-package" translators: I think web developers should be able to write both the frontend and the backend parts of an application. It is not that difficult to learn. Programming is not something you can learn once and then repeat the same actions for the rest of your life.
I mean, we can whine all day and remind each other that it really does suck, but that does little to address the problem.
Oh, I can only hope...
Here in NYC I have never had unreasonable interviews even close to that. And I interviewed for a lot of senior developer positions and consultant positions.
In our own startup we have a completely different approach. Our motto is "People live lives. Companies build products."
We like to hire and work remotely because that eliminates geographic restrictions and lets people work asynchronously. We've found that the better the system for asynchronous communication, the better the long-term productivity and maintainability.
We use a common folder structure, code conventions, for each project. Developers build fully documented reusable components that are re-used across projects. Every developer is very replaceable (meaning our losses are limited if they leave or scale back their time). This is actually a great thing for developers given our compensation model (see below).
If a developer does something wrong (like checking in a syntax error), we first check whether this is something we should fix in the system (e.g., add a linter to the pre-commit hook). There are so many amazing open-source tools today. It's a compounding snowball to design a good system. Sometimes the COO job feels like an architect/developer role, just like DevOps, but for people: configuring processes and systems instead of programs or servers.
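A hypothetical version of that kind of guard, as a git pre-commit hook (assuming Python sources and python3 on the PATH; the details are illustrative, not their actual setup):

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): reject commits containing syntax errors.
# Only the staged .py files are checked, so the rest of the tree is untouched.
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.py'); do
  if ! python3 -m py_compile "$f" 2>/dev/null; then
    echo "pre-commit: syntax error in $f" >&2
    exit 1
  fi
done
```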
We hire from anywhere and prefer to work over the internet. Even our compensation model is different than what most companies do - it aims to attract independent people and entire teams, and compensate them based on the contributions they actually do. We want to grow a snowball in a transparent way, and motivate people by giving them ownership of a product or feature instead of focusing on making them sell their time as full-time employees who commute to an office.
I'd love to get feedback on the compensation model btw: https://qbix.com/blog
That's the cooling effect in a nutshell.
You better watch out
You better not cry
Better not pout
I'm telling you why
Big Data is coming to town
It knows when you are sleeping
It knows when you're awake
It knows if you've been bad or good
So be good for goodness sake.
That's interesting of itself, but the bigger underlying issue is that opportunities are becoming more concentrated. When only a few companies dominate hiring in many fields, their mistakes get seriously amplified. Back in the day you were fine if Google's hiring process misjudged you - you could work for Excite or Altavista instead. Nowadays if some ML algo decides that people wearing blue sneakers are worse job performers you can get screwed (without even knowing why). And even worse, the major companies (where the jobs are) often share algorithms.
This page kind of assumes the audience is already willing to admit social cooling as a legitimate phenomenon, and, if not, will be convinced to do so after a few short bullets and very little in the way of actual analysis (ironically, this sort of approach leverages one of the modern patterns the piece could tackle: short bursts of information, instant delivery, decreased skepticism and reduced reflective thought).
Also, I'd highly recommend avoiding the global warming comparison. It does a disservice to your cause. It basically comes off, at least to me, as saying "our problem isn't a substantial thing in its own right, so let's compare it to this other big problem people already care about and hope the very loose and forced analogy strings them along".
All this being said, y'all should check out Horkheimer's essay "The Concept of Man." He wrote it in ~1952 (might've been '53 or '57, I'm forgetting the exact date), and it's crazy how prophetic that essay turned out to be. It shows how all our innovation really just led to an amplification of social structures and patterns that were already emerging during the dawn of automation and mechanization. I think it's relevant to your project.
Thanks to the moral police and keyboard warriors out there normalizing contacting employers over an internet argument.
There's one side of this which is straightforward. Companies and governments are compiling data for their own purposes, which range from modeling user behaviour to profiling you so that they can sell you stuff or arrest you for dissidence.
The lines we previously defended for privacy and for freedoms of conscience, affiliation, and speech have been disturbed, to say the least. This has generally been done under the surface, without involving users. It is increasingly felt on the surface, via the ads you see on FB or the recommendations YouTube feeds you.
The other side of this is what I think of as a "post-history" problem. We're now transitioning into a period where reality is simply recorded. Your comment on Chelsea Manning's release is now a matter of public record. Your next Tinder date might see it and so might the HR manager reviewing your application for senior talent accumulator in 2032.
There are all sorts of implications to that, but mostly people just feel weird about it for now. Anxious and uncertain.
So... FB (HN, whatever) is a space for casual discussion. Casual generally meant private in the past. Now, some of the most casual discussions mean an extreme opposite of private. This inevitably comes with stress.
Calling it a chilling (or cooling) effect evokes a political dimension, one that speaks to the first part of the issue. The second issue is more of a social issue. It's political too, but I don't think that's where the centre of mass is.
I'm not saying we should stop (although that's what might happen), just that we pause and consider what this is doing to the world. It is the undercurrent for so many profound changes going on right now.
Are we really comfortable as individuals building systems which predict someones mental (ill) health, personality traits or ethnicity just so we can sell them things, or worse, not sell them things?
You could even say that this page, and people trying to raise awareness for this issue, are harmful!
Imagine a few important people stepping up and saying, no, we will not disadvantage applicants because of their "unprofessional" facebook profiles. In fact, we value authentic, unintimidated people. The act of saying so will make it a little bit so!
We need to shift the blame from people expressing themselves, to those people punishing them for it, or even to people giving well-meaning advice like this.
(Just a crazy thought I just had. Didn't want to be too harsh with the creator, who raises an important discussion.)
One option is to bring back anonymity so people can make public, anonymous comments. Anonymity has been sharply curtailed (because terrorism) and this is, IMO, bad for society.
Another is to mandate short term limitations on use. For example if the employer wants to look at your online presence they can only look at last week of your posts and only for initial employment consideration. IMO employers should not look there at all, but maybe this may be a palatable compromise.
The chap in HR is not itching to dig dirt on employees -- he just has a distorted notion of due diligence forced on him. If he has a clear, legal definition of what he can and he cannot look into I suspect he will gladly comply. My 2c.
A related NYTimes opinion piece  encourages "help young social media users realize that their online and real-life experiences are more intertwined than they may think. Parents might, for example, cite current events, like the Harvard episode, to remind them that nothing online is ever completely private". Which is true, good advice, and social cooling.
And the nytimes/reuters version  is currently "Page No Longer Available". How does that affect your confidence that "if it was going on, you would know about it"? :)
http://www.thecrimson.com/article/2017/6/5/2021-offers-resci...
https://www.nytimes.com/2017/06/07/well/family/the-secret-so...
https://www.nytimes.com/reuters/2017/06/05/business/05reuter...
By reducing moral relativism to the self and ignoring its role in relationships at large, individuality overcomes any collective moral system (be it religious, political or philosophical), and so self-righteousness assumes a form that values spontaneity and originality - the tools of personal promotion - above ethical soundness. This seems to be, in my opinion, the humus of the most visible social outcry. Social media outrage took the place of discussion, just like opinion articles are taking the place of news reports.
Uncritical adherence to this logic harms us all. And the chilling effect strengthens it.
In the past, people fought against a static, conservative religious or political moral, in order to make room for individuality, liberty and democracy. Now we have an agglomerate of individual perspectives fighting for visibility in social media, where popularity (by any shallow measure) took the place of reasoning. The chilling effect makes public virtue even more black and white, and conformity (or social cooling) is just settling in either side. Living in the fringe that is refusal of conformity (social heating?) has become more difficult and exhausting than ever...
I don't know. Maybe I'm wrong and things were like this for ages. Maybe there is an answer in all the valuable teachings of the past that we simply choose to ignore for the sake of the here and now.
1. Giving anyone access to your reputation is inherently bad.
2. Giving some amount of people access to your reputation is OK, but the amount of people big data gives it access to is now magically worse.
(1) is definitely untrue, at least to most people. We all definitely use our knowledge of other's reputations to make judgements, and apply social pressure to make them conform. For instance, if someone you know is a rich snob, or a vehement racist, you won't hang out with them.
(2) seems ad hoc. Why would letting more people know about your reputation magically be worse? Whether someone knows about your reputation should either be bad or not -- it's not dependent on how many other people are aware of your reputation.
It could use some serious fine tuning for grammar though, likely as a result of English being a secondary language.
If the guy who owns the website is on here, I'd be happy to help out with the syntax and grammar. PM me, I'd love to help out with this - it's really well laid out.
An example - there was a discussion a couple of days ago about FB and I questioned why a commenter felt the need to create a fake account simply to comment on FB. It turned out they weren't even a current employee but an ex-employee.
If it's people I interact with all the time, there are other ways of contact that are less data-mined, like a good old text message or phone call.
Oh yeah, and my last Facebook post is well over a year ago. There's no way in hell I will post random pictures on it that would show me in a bad light. It's basically a slightly less official business profile.
More on that in https://domainisticationofngrams.com
And by implications I mean something more than not seeing job ads, or not getting a loan.
Fortunately, there are no such cases that I am aware of. Unfortunately, it might be just a matter of time.
We also might conclude that "meh, teens getting drunk occasionally" or "meh, people actually having a sex life" is pretty goddamn normal and get over a bunch of nonsense.
No matter what goes on around us, we still have a choice in how we interpret things and what kind of world we choose to build. There is zero inevitability here.
When Demi Moore posed naked on the cover of a magazine while pregnant, this was some sort of shocking dramatic thing. Now, it seems like every pregnant celebrity does the exact same pose and posts it somewhere. It has become prosaic.
Seriously, we can choose to be more humane to people. Things going to hell is not some inevitability.
Edit: Maybe a better example is that when 24-hour news channels became a thing, it changed the news. Before that, people were very strait-laced and serious for the 30 minutes that they reported the news. This was not sustainable when reporters had to talk live all day, every day. They became less stiff and formal, more able to crack a joke and be human. They still had to treat some subjects with appropriate respect, but 24/7 news channels caused news to lighten up some.
That is a real concern to me.
The things I say in a social group of former college buddies and the things I say in a group of the local clergy are two different things. That doesn't make me two-faced: it makes me human. In fact, the ability to converse and trade with drastically different social groups is probably the essence of humanity.
Yet our current overlords that program the internet are convinced that the entire world should run as if it were just a huge version of their favorite social group. Joe tells racist jokes? Maybe we let Joe continue, but we definitely ought to score that. After all, Joe could offend somebody -- and then they would be mad at our platform, not Joe.
We are instrumenting a terrible evil on our species, even more evil than the security and surveillance state, if such a thing could be possible. SkyNet has finally attacked, and because there are no T-1000s leading the way the vast majority of the population doesn't even know it's at war.
The problem comes when they repress negative emotions and other status detractors because the cost of even being aware of them is too high. Then you have people who, in psychological terms, are prisoners of their shadow selves. They become anxious and depressed because they fear confronting it.
I don't know how it would be implemented, on what schedule, and to what extent.
I know that it's nowhere
But there is no denying that
It's hip to be square
Thoughts I've had:
Total quantity of data available?
Ability to define boundaries?
Ability to enforce those boundaries?
Knowledge of what boundaries to even define?
Who knows what about a person?
How many agents know what?
How aware is the subject of actual knowledge?
How rapidly can that knowledge be further transferred?
Does the surveillor know more of the subject than the subject?
Can the subject access that knowledge?
What level of benefit (or harm) can be transacted on the basis of surveillance? Does this accrue to the subject or others?
Just raising awareness won't change anything - the system is working as intended for the people who were sold on it and the people who implemented it (bar a few unfortunate engineers who had to do it for the money). History is rife with examples of people trying to enforce a more rigid social order with varying degrees of success. Letting people different from you have freedom is not something that many people want. Think hard about the last time you thought "the world would be a better place if everyone thought like me". Then realise how many people don't follow that with "but enforcing a mind-police on society is awful".
Does anybody have a link to an online server (with public domain books)? I'm curious to see what the presentation is like. What's the typography like? Does the screen dim after 30s? What's the browser battery consumption like compared to an ereader app?
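I don't have a public demo link handy, but if you just want to see the presentation, Calibre ships a built-in content server you can run against your own library. A minimal sketch (the library path is a placeholder; assumes calibre is installed):

```shell
# Start Calibre's built-in content server on port 8080,
# serving the library at the given (placeholder) path.
# Then browse to http://localhost:8080 to see the web reader.
calibre-server --port 8080 /path/to/your/calibre-library
```

Point it at a library of public domain EPUBs (e.g. downloaded from Project Gutenberg) and you can judge the typography and battery behavior yourself.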
Long term, my big concern about ebooks is DRM. Amazon's most recent version (KFX) hasn't been cracked and workarounds involve getting Amazon to send you an older version of the file with older, crappier hyphenation and layout. I've started mostly buying DRM free books from Amazon, but they don't make it easy to find them.
Am I the only one who is turned off to calibre due to how "heavy" and clunky it feels? I suspect this is due to the program being written in Java. I think the author does great work maintaining the project but frankly wish it was more modern.
Perhaps this is a good side project for me to delve into ;)
EDIT: thanks to users who clarified that Calibre is written in Python.
Moon Reader (http://www.moondownload.com) would be great if it had a desktop or web-based client...however, it's only supported on Android. If Calibre can give me this experience, its value just increased immensely. Looking forward to trying this.
So far I've only tested on Moon Reader.
I was hoping that this could replace my Google Drive folder with various papers on interesting topics I'd like to read.
I hope it gets better though, because it's a great concept
EDIT: also, the forums aren't letting me register because their captcha is broken, so I can't discuss/submit my bugs there.
EDIT2: I was able to submit a bug report to Launchpad.
I like it better than Amazon's Kindle apps, and you can use an open format.
It's handy if you've only got access to an epub, and a bit less clunky than sending via email.