I think this has pushed me over the edge.
This is just a personal theory, but I suspect Twitter's choices have done huge damage to Western Civilization, by forcing, as a medium, a very short 'soundbite' structure onto debate. (Even more so than the media which gave us the term 'soundbite' ever did!)
So that sounds like a very overblown assertion, right?
But think about Trump. Twitter is his platform, and arguably he is the sort of President a platform like Twitter most directly enables. He gets direct unchallenged access to a mass medium, a mass medium which makes it particularly hard to counteract false claims or have reasoned debate. For exactly the issues Antirez is raising.
I don't have evidence to support my theory, all I can say is I don't think I want Twitter to succeed.
I told someone we were looking at it and I too got a similar response.
Twitter seems to support less of a netiquette; it's more point-and-shoot. Besides that, it seems intuitive that the 140-character restriction lends itself less to discussion than proper forums do.
On the technical side, this is a perfect example of how AI can be used effectively, and is a (very obvious in hindsight) application of the cutting edge in scene understanding and HCI. There are quite a few recent and advanced techniques rolled into one product here, and although I haven't tried it out yet it seems fairly polished from the video. A whitepaper of how this works from the technical side would be fascinating, because even though I'm familiar with the relevant papers it's a long jump between the papers and this product.
On the social side, I think this is a commendable effort, and a fairly low hanging fruit to demonstrate the positive power of new ML techniques. On a site where every other AI article is full of comments (somewhat rightfully) despairing about the negative social aspects of AI and the associated large scale data collection, we should be excited about a tool that exists to improve lives most of us don't even realize need improving. This is the kind of thing I hope can inspire more developers to take the reins on beneficial AI applications.
Things like this make articles like this one seem silly: https://www.madebymany.com/stories/what-if-ai-is-a-failed-dr...
edit: I'm wrong. It's in other stores as well, just not in the Australian app store, which is the one that I tried.
And this is just from my perspective - someone who is not visually impaired. For the person who is, every single thing they look at and read is going to be recorded and used.
It's an unfortunate situation for people to be put in, and I'm sure everyone will choose using improvements like this over not using them. As much as I would love to see a focus on privacy for projects like this, I don't imagine it happening any time soon, given how powerful the data involved is.
I imagine a future where AI assistants like this are commonplace, and there is no escaping them.
Screw Siri, that's a real AI assistant :)
Edit - I've been a remote worker for over 10 years and will likely continue as long as I possibly can
In the story linked, it seems like the author was prioritizing family or health over work at times (me: Good! We need more of that!). The employer can't just fire you for that as it looks a lot like wrongful termination. So they need to build up a paper trail.
1. They quickly and clearly told the employee of the problem.
2. They proposed a course of action.
3. Their proposal worked (at the end they were happy with his work).
It was not pleasant for sure (for both sides, I bet), but would it be worse to say nothing and terminate him for non-performance a couple of months later? My 2c.
> From my biased perspective, it is difficult to see how these personal improvement programs for a disappointing employee can ever be a constructive force.
> By the end, they were quite happy with my work.
...sounds pretty constructive to me?
I wouldn't keep someone who is not performing, for whatever reason, because by the time I lose trust in that person, in my mind he's pretty much out the door. Because this is business, not a charity; one is responsible for one's own life.
As for those who say "company culture and fairness", I say your competitors are already eating your lunch.
You need to use them for positive feedback as well as negative, and general chats about progress and ideas.
What we're all missing is how these things were presented - were the managers being unreasonable dicks in the way they talked about it?
"Mismatched expectations" about availability aside, what does the author expect when they're supposedly letting their team down?
I've had a number of conversations with managers where they've told me I'm letting them down, and I'm always incredibly grateful for them. Sure, they're sometimes hard to hear, but I would much rather receive feedback early so I can act on it and be better at what I do.
Not to mention how incredibly valuable I find regular peer-review cycles (the last 2 companies I've been at do these every 6 months). I've always found it super helpful to be told what I should continue doing well and what I need to improve on. Doesn't everyone want to be the best at what they do?
How can the author say that they don't see the point in these plans when it appears that they were able to correct their behavior and stay at the company until it went under?
"By the end, they were quite happy with my work." Looks like the author is building this whole drama out of nothing.
That being said, there have been a few people put on a PIP who I think would be better off separated. I guess from this standpoint it does serve to protect company interests.
I do know that working remotely introduces all-new problems; the hardest lesson I had to learn was not to let myself be distracted, and I actually ended up with a personal checklist that became natural after a time. I now understand how to make sure those who get more than one WFH day effectively use this privilege and keep it.
It sounds like Google is working on improvements to the process. This is important work, because mosquitos are a major cause of disease, especially in Africa, and we haven't been able to fully solve the problem with existing technology.
I want a documentary "How it's made: Mosquitocide". I'm willing to make one if someone can provide access to info and logistics.
Am I just making this up/misremembering it?
Edit: found a few sources.
My immediate reaction on reading that sentence was to wonder why they'd written it in some kind of Shakespearean English.
My next reaction was to feel stupid.
Thank goodness. We can't eliminate mosquitoes fast enough.
Wildlife will probably find other food sources, so bring on the weapons of mosquito destruction.
Oxitec has worked for years to filter their mosquitoes so only ~0.2% of the released mosquitoes are female. They then had to demonstrate that and more in many trials before being allowed to release their mosquitoes in the wild in Panama and Florida.
Otherwise, it's great that Google can overcome the other factors that would stop this kind of solution, like NIMBYism and working with county/municipal boards. These solutions are great.
I think this paper is relevant, but I only scanned it:
Does anyone know what % population reduction this process results in? They'd have the males likely die after 2 weeks, and that just wipes out the reproductive chances of the females in that period. Google is treating for 20 weeks in dry weather, which is not exactly the peak reproductive season of this mosquito.
Or is this a similar class of problem to antibiotics becoming useless over time?
I.e. it's useful to do now so let's cross that bridge if we come to it?
Or is there something else I don't understand about this?
Or would it be legal for me to just go and release a cloud of mosquitoes myself?
Am I missing something?
What I learned: not to make a compiler during a hackathon.
Worth noting that Clarifai just released an SDK for offline mobile DL training/evaluation. Not browser based but I'd be curious what the difference in GPU utilization is practically.
travelling salesman in js+gpu: https://amoffat.github.io/held-karp-gpu-demo/
I had in mind matrix operations for neural networks, as in https://github.com/transcranial/keras-js.
My project manager just heard "So, you're saying this is production-ready? Great!"
CPU: 0.426s 7.6%
GPU: 2.399s 4.7%
Running Chrome, Latest Stable. Windows 7. It seems odd to me that it would take 6x longer when my graphics card (GTX 690) would theoretically be much faster than my CPU (Intel i7-3930k)
https://github.com/tqchen/tinyflow would be a great showcase (and useful) project to port for Gpu.js
Never really took off. In the end, OpenCL and CUDA were the winners in this space, and both, while explicit GPU languages, can be simulated on the CPU. I think this pattern will continue.
Not sure what the root cause of this is.
Benchmark on iPad Air 2 iOS 10.3.2 is close for both chrome and safari.
CPU: 6.110s 1.9%
GPU: 0.487s 1.0% (12.55 times faster!)
CPU: 4.454s 51.8%
GPU: 0.483s 0.9% (9.22 times faster!)
Project looks promising though, congratulations to the team.
It has a very concise, cross-platform GUI DSL ("inherited" from Rebol) which requires a 1MB runtime only on top of the host OS' GUI system.
Look what you can build in ~7kB which can run on top of a 1MB runtime, not a 100MB browser...
A few years ago I actually built an iPhone app launcher simulator in a few kilobytes which looked exactly the same on a PPC iBook, an x86 Mac Mini, or a Windows PC...
An old cell phone is like a raspberry pi, if a pi had a built in touchscreen, camera, microphone, battery, wifi, and gps.
I long for the day where I can have the same feeling on my phone.
This is really exciting!
If there is a way for a layman to help, please provide some guidance. I'm comfortable tinkering around with Linux and breaking and fixing things, but this looks a bit over my head.
Keep up the great work!
As for the Alpine Linux it's based on, that stuff is seriously nifty in its own right. Any old decrepit box is a decent server with Alpine on it.
I do wonder if using Linux will make battery management hard.
Um. That could be a problem.
Now, the government is too afraid to publicly speak about Liu, to not sour what they have spent years trying to fix. Weak.
To me it's the embodiment of the hypocrisy of the West when it comes to the defense of human rights. Let's be frank: we didn't abolish slavery, we just outsourced it to make it more acceptable. It's also proof that capitalism doesn't need democracy or free speech to function.
China will get democracy one day. It will be because a large swath of Chinese people want it. Although there have been some high profile dissidents in China over the last 30 years, including Liu Xiaobo and the 1989 students, most Chinese are probably not ready for their message of political openness. They're still focused on lower levels of the Maslow hierarchy like shelter and food.
These dissidents have the bad luck of poor timing. In another 30 or 40 years when China is fully developed, their ideas may be well accepted among citizenry and party reformers.
EDIT: Of course they do much worse; I was pointing to the fact that not a peep was heard on Fox or CNN.
I think that the Communist Party of China made a really big mistake, one that has seriously damaged China's national image. But I also do not agree with Liu Xiaobo's political views.
Finally, China's political environment is indeed bad, but most of the Western media coverage distorted the facts, including around the Liu Xiaobo affair.
How much does a mile weigh?
But I do think more folks would try if they could take the college money and invest it in things like starting a business, no matter what the business is, and I wish more folks had this option.
Our general inspiration is to create a new kind of data warehouse based on code management practices that haven't yet reached the data domain.
Feedback welcome. Ask me anything.
I can't imagine a future where we don't treat data version control as a necessity in the same way as code version control. I hope Quilt can fill the much-needed role of "GitHub for data".
So I can assume this isn't going to be afraid of gigabytes, right? I've seen services before that want to be a repository of data, and I try to upload a mere 20 GB of data and they're like "oh shit nevermind". Even S3 caps a single PUT at 5 GB; past that you have to use multipart uploads.
Anyway, do I get this right: they expect users to be experts in data analysis but not able to load the data into whatever software they use? They want me to share data and to offload my data into their walled garden, which can be accessed only via their service? If I wanted to share my data, wouldn't I rather use something more accessible?
How does this compare to what data.world is doing? They recently released a Python SDK as well.
 https://data.world/ https://github.com/datadotworld/data.world-py
I had actually been thinking about Parquet as a component of ETL, and whether it might be possible to make ETL many times faster by compressing to Parquet format at the source and then transmitting to a destination - especially in limited-bandwidth situations where you need entire data sets moved around in bulk.
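As a toy sketch of that idea (assuming pandas with pyarrow installed; the file names are illustrative): read the raw extract at the source, write it as compressed columnar Parquet, ship the file, and read it back at the destination.

    import pandas as pd

    # At the source: raw extract -> compressed, columnar Parquet.
    df = pd.read_csv("extract.csv")
    df.to_parquet("batch.parquet", compression="gzip")

    # ...transfer batch.parquet over the limited link, then at the destination:
    restored = pd.read_parquet("batch.parquet")

The win would come from columnar encoding plus compression shrinking the bytes on the wire; whether it beats plain gzipped CSV depends on the data's shape and cardinality.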
This looks really nice for sharing public data sets, but I wish that there was a better public non-profit org running indexes of public data sets.... I guess if something like the semantic web had ever taken off, then the Internet itself would be the index of public data sets, but it seems like that dream is still yet to materialize.
 https://github.com/textkit/datapak http://okfnlabs.org/blog/2015/04/26/datapak.html
Paying flat fees for access to repos is fundamentally thinking about the problem incorrectly.
The first thing I looked for was a canonical package/resource specification in build.py. Any chance of supporting the Frictionless Data resource spec for interop?
It would be really cool if quilt could generate documentation for datasets, even if it was just column names/types. One of the issues we have is keeping track of all of the data "assets" people have pulled or created.
You guys should make the search bar a little more prominent. Took me a while to find it!
 http://cernvm.cern.ch/portal/filesystem http://nixos.org/nix/
1) Load original data from source into quilt
2) Do transformation
3) Commit transformations to quilt, with commit message
4) Run experiment
5) Do new transformations
6) Commit to quilt
7) Run experiment
Rinse and repeat.
Looking at the video and documentation, this is not emphasised at all, suggesting that edits to data should be saved as a new package.
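A rough sketch of the loop proposed above in Python; quilt.build/quilt.push follow the names in the launch docs, but treat the exact signatures (and the build.yml files) as assumptions rather than confirmed API:

    import pandas as pd
    import quilt  # hypothetical usage of the Quilt client

    quilt.build("me/sales", "build.yml")      # 1) snapshot the original data
    df = pd.read_csv("sales.csv")
    df["total"] = df["qty"] * df["price"]     # 2) do a transformation
    df.to_csv("sales_v2.csv", index=False)
    quilt.build("me/sales", "build_v2.yml")   # 3) snapshot the transformed data
    quilt.push("me/sales")                    # publish the new version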
No idea what a DataNode is, so I'm struggling to actually see the data! Any tips?
>> Reflecting on this time later, he remembered the flashes of intuition. The work wasn't linear; ideas came when they came. "One night I remember I woke up in the middle of the night and I had an idea and I stayed up all night working on that." <<
Do people think that putting Shannon somewhere like the Institute for Advanced Study would actually have sped up his thinking? Or is a level of distracting background activity actually helpful?
The narrative my WWII relatives give is that everyone enlisted. If you didn't, there was shame that required an explanation.
Around that time I came across an interesting idea. I don't remember if it was in the Idea Factory, or in material I read afterward, but it's related to one of central ideas from this excerpt:
The sender no longer mattered, the intent no longer mattered, the medium no longer mattered, not even the meaning mattered: A phone conversation, a snatch of Morse telegraphy, a page from a detective novel were all brought under a common code.
The idea I came across is that:
Shannon's information theory, devised at AT&T, indirectly led to the demise of AT&T's monopoly.
So I think the argument was that it made economic sense for a single organization to own all the wires, so it could maintain them with a set of common specifications and processes. But if you can reduce every wire to a single number -- its information-carrying capacity -- then this argument goes out the window. You can use all sorts of heterogeneous links made by different manufacturers and maintained by different companies.
(I'm not sure if this is historically accurate, but technically it sounds true.)
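For reference, the single number in question is the channel capacity. By the Shannon-Hartley theorem, a link of bandwidth B with signal-to-noise ratio S/N carries at most

    C = B \log_2\left(1 + \frac{S}{N}\right)

bits per second, no matter who built or maintains the wire.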
So my thought was that there's an analogous breakthrough waiting to happen with respect to cloud computing. Google and Facebook have information monopolies based on centralized processing of big data in custom-built data centers. Likewise, AWS has a strong network effect, and is hard to compete with even if you have billions of dollars to spend.
So my question is: Is it possible there will be a breakthrough in decentralized distributed computing? And could it make obsolete the centralized cloud computing that Google/Facebook/Amazon practice? Just like AT&T had no reason to be a monopoly after Shannon, maybe a technological breakthrough will be the end of Google/Facebook/Amazon.
Maybe this idea is too cute, and you can poke holes in it on a number of fronts, e.g.:
- Shannon's ideas were profound, but they didn't actually bring down AT&T. AT&T was forcibly broken apart, and there are still network effects today that make re-mergers rational.
- Centralized distributed computing will always be more efficient than decentralized distributed computing (?) I'm not aware of any fundamental theorems here but it seems within the realm of possibility. (EDIT: On further reflection, the main difference between centralized and decentralized is trust, so maybe they're not comparable. Decentralized algorithms always do more work because they have to deal with security and conflicting intentions.)
But still I like the idea that merely an idea could end an industry :)
Relatedly, I also recall that Paul Graham argued that there will be more startups because of decreasing transaction costs between companies, or something like that. But it still feels like the computer industry is inherently prone to monopolies, and despite what pg said, the big companies still control as much of the industry as Microsoft did back in the day, or maybe more.
Hmm. Was the 5 day work week common by 1940?
I have a notion that we went from Sunday off only, to Sunday plus Saturday afternoon, then Sunday plus Saturday off. Not sure when that happened, or where it started.
Sorry, maybe not a Flash ad. Upon further inspection it was an iframe whose contents were blocked by my ad blocker, presenting me with that grey box in Chrome. I'm not going to disable it to check what it actually is.
We are looking to replace our ARIMA models with RNNs, and the results so far have been far from satisfactory.
The use case is: based on sale quantity over the past year, predict the sale quantity tomorrow.
Regression does not consider weekdays or weekends or similar bumps, and we thought an RNN with LSTM would be well suited to this problem.
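One way to hand those weekday/weekend bumps to an LSTM is to feed a day-of-week one-hot alongside the lagged quantity. A minimal sketch of the windowing (the window length, shapes, and the assumption that sales[0] falls on weekday 0 are all illustrative):

    import numpy as np

    def make_windows(sales, window=28):
        # Build (samples, window, 8) inputs: lagged quantity + day-of-week one-hot.
        X, y = [], []
        for t in range(window, len(sales)):
            steps = []
            for i in range(t - window, t):
                dow = np.zeros(7)
                dow[i % 7] = 1.0  # assumes sales[0] is weekday 0
                steps.append(np.concatenate(([sales[i]], dow)))
            X.append(steps)
            y.append(sales[t])
        return np.asarray(X), np.asarray(y)

    X, y = make_windows(np.random.rand(365))  # a year of daily quantities
    print(X.shape, y.shape)                   # (337, 28, 8) (337,)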
For even more customized RNNs, such as attention mechanisms or beam search as in Seq2Seq, you'll need to skip the tf.nn.dynamic_rnn abstraction and use a symbolic loop directly: tf.while_loop.
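A minimal TF 1.x-style sketch of that pattern, stepping an LSTM cell manually inside tf.while_loop (shapes and names are illustrative, and the custom per-step logic is left as a comment rather than a real attention implementation):

    import tensorflow as tf  # TF 1.x-style API

    num_steps, batch_size, input_dim, hidden = 20, 32, 8, 64
    inputs = tf.placeholder(tf.float32, [num_steps, batch_size, input_dim])
    cell = tf.nn.rnn_cell.LSTMCell(hidden)
    init_state = cell.zero_state(batch_size, tf.float32)

    def cond(t, state):
        return t < num_steps

    def body(t, state):
        # Custom per-step logic (attention scores, beam bookkeeping, ...) goes here.
        x_t = tf.gather(inputs, t)       # [batch_size, input_dim] at step t
        _, state = cell(x_t, state)
        return t + 1, state

    _, final_state = tf.while_loop(cond, body, [tf.constant(0), init_state])

You get full control of what happens at each timestep, at the cost of writing the loop-carried state plumbing yourself.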
I am unsure why anyone would use Erlang for number crunching. Training neural nets is basically just multiplying big matrices. I was hoping this project would come up with an interesting approach (how about using SIMD on the binary comprehensions that can use it? now that would be cool) but performance / memory usage does not seem to be looked at here.
It is naive/uneducated to think that "Erlang's multi-core support" plus distributedness will enable many things for you. How does the VM scale to 32 or 64 threads? Have you tried making a cluster of 50+ VMs? Unfortunately, Erlang Solutions Ltd.'s marketing has hyped many people.
I am not against projects like these, I am just looking for reasons behind the choices made.
I'm in no way an expert, but I work in Erlang in my day job and just glancing at the repo, this solution can't possibly be performant. A) Erlang is slow at math. B) Arrays don't have O(1) access (ETS tables might be able to help with this). C) You can't scale this solution with more Erlang nodes (without some additional work).
I really like Erlang and want to evangelize it, but I don't think this is a good way of doing it. I only see this as a neat toy, not a selling point for using Erlang.
As a side note: I noticed the repo has a feature note about adding NIFs for performance bottlenecks (native C code for Erlang to talk to). If you end up writing C code, then what are you gaining from Erlang?
So, it's as custom as Apple's (pre-San Francisco) system font Myriad compared to Frutiger's Frutiger, of which Frutiger said it was "not badly done", while feeling that "the similarities had gone a little too far": https://en.wikipedia.org/wiki/Myriad_(typeface)
I wonder if Apple actually designed this font, or just asked for permission and extracted it from the built-in font of some industrial printing machine.
Yeah, sounds very convenient... How exactly would that work?
Are you concerned Apple will shut you down somehow?
I don't understand this part. The aspect ratio of the box seems much more extreme than 3:1.
Central America, Northern South America, Northern Africa, and the Middle East seem particularly bad.
Looking at the "walkability" graph, you see that a lot of the cities listed as the worst (Arlington, Fort Worth, San Antonio) are in Texas, where it's pretty hot.
Living in Las Vegas, I totally get this. You just don't go outside in the day time, which can put a crimp on activity.
On the flip side, in the winter it's very nice, and is the perfect time to go outside and get some exercise.
2. Temperature variations are not accounted for.
3. "The cost to exercise" (e.g. cost to live in walkable areas, average cost of gyms, etc.) is not accounted for.
Furthermore, the website (haven't read the paper yet) implies walkability is always good. Walking around during the summer time in a hot climate like Texas will get you killed.
The analysis is obviously good; however, it basically says what's already obvious: rich people exercise more (in the case of the United States). Is there something inherent in the areas that makes people more or less likely to exercise, or is it the demographics? Most of the evidence points to the latter. This raises the question: since you inherently can't change your demographic, what difference does it make?
 This happened just a couple weeks ago:
An excerpt from the Introduction:
"...Mezofanti liked to quip that he knew 'fifty languages and Bolognese.' During his lifetime, he put enough of those on display -- among them Arabic and Hebrew (biblical and Rabbinic), Chaldean, Coptic, Persian, Turkish, Albanian, Maltese, certainly Latin and Bolognese, but also Spanish, Portuguese, French, German, Dutch, and English, as well as Polish, Hungarian, Chinese, Syrian, Amharic, Hindustani, Gujarati, Basque, and Romanian -- that he frequently appeared in rapturous accounts of visitors to Bologna and Rome. Some compared him to Mithradates, the ancient Persian king who could speak the language of each of the twenty-two territories he governed. The poet Lord Byron, who once lost a multilingual cursing contest with Mezzofanti, called him 'a monster of languages, the Briareus of parts of speech, a walking polyglott, and more, -- who ought to have existed at the time of the Tower of Babel, as universal interpreter.' ...
"On one occasion, Pope Gregory XVI (1765-1846), a friend of Mezzofanti, arranged for dozens of international students to surprise him. When the signal was given, the students knelt before Mezzofanti and then rose quickly, talking to him 'each in his own tongue, with such an abundance of words and such a volubility of tone, that, in the jargon of dialects, it was almost impossible to hear, much less to understand them.' Mezzofanti didn't flinch but 'took them up singly, and replied to each in his own language.' The pope declared the cardinal to be victorious. Mezzofanti could not be bested."
 - https://en.wikipedia.org/wiki/Mezzofanti
 - https://www.amazon.com/Babel-No-More-Extraordinary-Language/...
I wonder if that can be ported to, e.g., math education?
About this method he said, "You don't need a hydrology course to learn to swim. You don't point at the water and say, 'This is water, this is how water works.' You just throw the babies in."
However, it did enable me to become fluent in Spanish in two months. Fluent enough to teach high school physics, in Spanish, at the Instituto Americano of La Paz.
It's pretty inspiring and also shocking how someone could be so devoted to something for so long and how little it gave him other than what he intrinsically got from it.
We should be thankful that we live in a time where those of us who want to devote ourselves to engineering and computer science can also reap rewards which let us have the freedom to live the lives we want. Just because what we do is useful or hard doesn't mean it needs to be financially fulfilling.
Shouldn't someone point these folks at VR headsets and livestreaming stabilized 360deg video? It doesn't matter that he can't walk and is stuck in the US. With someone to be his walking and conversation companion in Rome, and to hold his eyes, he could walk Rome every morning, telling stories, and recording it for posterity.
Does anyone know what mystery "the final few pages in the Joe Armstrong 2nd edition Erlang book" hold?
Well, the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.
Today, encryption is a check on government overreach, and guns are effectively a vestigial hobby (unless you're in a gang or the illicit drug industry).
For those of you able to donate, the equivalent of the EFF in Australia is the EFA: https://www.efa.org.au/
The original researchers have started a PaaS solution providing an API that you can hook into your apps today, allowing you to get (lat, lng) coordinates inside mapped structures. One needs to build path-finding on top of that though.
The most interesting feature to me is the ability to add annotations along the way. I'd use that to describe wayfinding points, such as "the elevator" or "the giant ice cream cone."
This would also work well for an airport.
With such a vision there is strong ambiguity in the world over what is potentially symbolically meaningful to someone, the meanings things can carry, and the stories their symbols can follow. I think that essential ambiguity of interpretation, of the apparent orders and symbols in the world, makes nonsense of the popular idea that we may ourselves be certain sophisticated symbolic constructs in an advanced simulation.
Sorry for this but I am in the middle of doing something else and have only skimmed this paper but it looks tantalisingly relevant...
That ending was an incredibly well delivered stab at Deutsche Telekom. This is why I love vigilante security.
john --test --format=nt
Benchmarking: NT [MD4 128/128 X2 SSE2-16]... DONE
Raw: 29037K c/s real, 29037K c/s virtual

john --test --format=bcrypt
Will run 16 OpenMP threads
Benchmarking: bcrypt ("$2a$05", 32 iterations) [Blowfish 32/64 X3]... (16xOMP) DONE
Raw: 5472 c/s real, 490 c/s virtual
If such a company's database of hashed passwords is leaked, then an attacker doesn't even have to crack the hashes - the hash itself is a valid version of the password. Yet I've seen this behavior at multiple companies; only one of them pushed back against my request to remove that "feature", and I didn't stay with them much longer after that.
Single unsalted broken MD5 is a far cry from scrypt... and even scrypt is probably a bad idea with all this crypto currency hashing hardware out there, unless you have a seriously strong password.
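For contrast with single unsalted MD5, a minimal salted slow-hash sketch using only the Python standard library (hashlib.scrypt; the cost parameters here are illustrative, not a tuned recommendation):

    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)                     # unique per user
        digest = hashlib.scrypt(password.encode(),
                                salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(),
                                   salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

The per-user salt kills precomputed tables, and the memory-hard cost function is what drops the c/s numbers quoted above from tens of millions to thousands.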
Just don't publish hashes.
Wait, was that just a straight bash command? Is this installed on my computer?
>$ whois
>usage: whois [-aAbdgiIlmQrR6] [-c country-code | -h hostname] [-p port] name ...
Holy shit lol, that's neat.
Of course, you could still crack some (a problem), so keeping multiple secrets hidden through obscurity (the hashes, the salts, etc.) is another layer of security.
This doesn't guarantee security, but it's certainly more secure. It is additive, though: there's no reason to just use MD5 (or plaintext) because "my hashes are secret".
This is a different situation and public keys are not directly analogous to password hashes: there isn't a reliable way of cracking public keys in the same sense that there's a semi-reliable way of cracking hashes. But it was still strange and uncomfortable to me that they would reveal this "target" (and if there were specific key generation bugs, like RNG seeding errors, people might actually be able to crack a few of them and know that they had succeeded).
Relatedly, I was thinking about the magic crypto-cracking device in the movie Sneakers. Once they had it, they could immediately use it to log on to random network-connected services, defeating the authentication. So, how is that supposed to work? How do they automatically know what credentials would be accepted for a particular service? Are there common network authentication protocols based on public-key cryptography that have the property that the verifier tells the prover the public keys that it trusts?
Because that was never the problem with income inequality to begin with.
The problem isn't that one person makes $10,000 while another makes $40,000. Those people are both struggling. The problem is that one person makes $10,000 (or $40,000) while another makes $10,000,000.
Reversal never even enters into it. If you took $9,900,000 of the richer person's money and split it between 1000 poorer people, each would get $9,900, while the richer person, left with $100,000, would still have more money. Even though that would imply a 99% tax rate.
"In addition, the Tibetan herders who participated in the study had a markedly higher level of rank-reversal aversion than other subjects. This also suggests the trait is cultural"
An all-too-common kind of sloppy thinking, which generally kills my desire to read any further. No such thing is suggested. The trait may be learned, or it may be completely hardwired, kicking into action between ages six and ten. Different populations may have evolved different inbred attitudes to equality.
It seems like an alternate explanation is that people don't like to artificially pick winners and losers, but they're willing to lessen the gap between winners and losers if it's too big.
There are philosophical concerns about justice and utility in messing with organic selection mechanisms. Perhaps a fear of instability plays into it, but it seems like there are more nuanced narratives that can be applied as well.
- 76.87% of subjects accepted a 25% tax rate intended for redistribution.
- 44.80% of subjects accepted a 50% tax rate intended for redistribution.
It's hard to infer much beyond that.
Perhaps they need to test for a larger number or more fine-grained tax levels between 25% and 50% and see if there is indeed a step change (or other sharp decline) in acceptability when it "reverses social order". Or if there is a smooth distribution curve based on tax rate, not relative position.
Even if you take the results as intended, that subjects were considering social order and not tax rate, you still have to concede that this could be about "fairness" and not "rank". It shouldn't be surprising that a majority of people dislike redistributing more than is necessary to achieve equality. That helps explain the results with the children. Ages 6-10 is about where they start to understand "fairness" from the perspective of both parties.
Additionally, taking the tax perspective into account, "Person B" appears to be subject to a lower tax rate than "Person A" despite ending up with more money, which subjects would likely see as unfair.
Seems a more succinct conclusion is that it's about fairness. Most view the economy as a zero-sum game, and we generally measure our standing in society relative to others. So, the idea of redistributing wealth to the point where relative fortunes are being inverted may violate our sense of fairness.
as it stands the middle class really has no way of "trading places" with the upper middle class, who can always out-earn them.
a marginal tax rate above 100% would close this loophole.
yes this comment is sarcastic.
With more magnification from my Web browser, there are still 96 characters per line, so I have to use the horizontal scroll bar twice on each line to read it.
So, since I was interested in the article, I selected all of the text, copied it to my system clipboard, pasted it into a new e-mail in my e-mail program that reflowed the lines and used a larger font, and then read some of the article.
With 96 characters per line, apparently the Web site is determined, feet locked deep in reinforced concrete, with iron-clad rigidity, to discourage as many readers as they can.
Ah, since the OP is about psychology, the 96 characters per line and the whole OP is really just a psychology experiment?
Curious that a Web site would want to work so hard to discourage readers.
It's also very important to remember that most people who manage wealth well do so in a way that benefits those who do not, and as such they retain their responsibilities as money managers because they are one half of a financially symbiotic system of exchange.
There are also a lot of social factors to consider that didn't necessarily (and understandably so) make it into the study's dependent or independent variable sets. The attractiveness of each party needs to be controlled for, as there is a definite bias toward giving money to those we find attractive. The age of someone is another control variable that needs to be accounted for, as the elderly are generally (and correctly, generally speaking) more adept at managing money, for how else would they have survived so long?
All in all, I think mostly what this tells us is that we are cautiously optimistic about the belief that those who have less in our social hierarchy are capable enough to have more, but that it's important to preserve and honor the way in which wealth flows, because it flows in a way that has kept us progressing for millennia.
I would be much more interested in reading about how much wealth they believed should be distributed rather than simply if it should. That would give us more than a binary response from which to extrapolate data.
By having clear delineations between fairings, request guards, and data guards, I think you can really avoid making a lot of design mistakes. I'm going to try out Rust some more and definitely play around with this framework! The only thing that bothers me is that Rocket says it requires a nightly version of rust - why is that necessary? I thought Rust was pretty stable by now.
So far, all the Rust web frameworks I've seen have pretty disappointing performance.
I was expecting C++/Java/Go level of performance. Instead, Tokio & Iron turn out to be slower than many frameworks in Ruby, Python, PHP, JS:
Regarding fairings, it seems a missing "middleware" case might be the sorts of things that cause redirects on entry (e.g. redirect routes with/without trailing slashes to the latter as a super trivial example). Is that something you'd expect to support in some way? I think that's something that doesn't feel like it maps either to guards or fairings well at the moment.
I did see where you mentioned your dislike of rails/sinatra/... style blunt force middleware, fwiw.
> To encrypt private cookies, Rocket uses the 256-bit key specified in the secret_key configuration parameter. If one is not specified, Rocket automatically generates a fresh key at launch.
Seems like a pretty clever idea. Do other servers/middlewares offer a similar feature? Seems like it would complicate deployment/scaling a bit if the secret has to be sent to all the nodes. Especially if they could silently ignore it if you accidentally don't configure the key for some nodes.
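For intuition on the key-sharing point, a minimal sketch in Python (plain stdlib HMAC signing rather than Rocket's encrypted cookies, but the deployment issue is the same): a cookie minted by a node with one key is rejected by a node holding a different key.

    import hashlib, hmac, os

    def sign(value, key):
        mac = hmac.new(key, value, hashlib.sha256).hexdigest().encode()
        return value + b"|" + mac

    def verify(cookie, key):
        value, _, mac = cookie.rpartition(b"|")
        expected = hmac.new(key, value, hashlib.sha256).hexdigest().encode()
        return value if hmac.compare_digest(mac, expected) else None

    key_a, key_b = os.urandom(32), os.urandom(32)  # two nodes, different keys
    cookie = sign(b"user=42", key_a)
    print(verify(cookie, key_a))  # b'user=42' -- same key: accepted
    print(verify(cookie, key_b))  # None -- mismatched key: session silently lost

So yes: with auto-generated keys, every restart or unconfigured node would invalidate everyone's private cookies, which is exactly the scaling wrinkle you describe.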
Perhaps without a willing participant, there can be no kairos; there is only chronos?
Great book, BTW. The most memorable from my childhood.