I also get a donations rebate; this is a one-page form that lists all my charitable donations. Very easy, very quick.
All my details are available to me online. All transactions are there and it's very transparent.
Why would anyone oppose a simple system like this?
Like ObamaCare? That came with two additional forms. Live in a high-tax state and like deducting your state taxes? That makes things more complicated. Big fan of deductions for education or child care? That comes with complexities. I could go on and on....
Now maybe your answer to all of these questions was "no", but there are a lot of people that say "yes" to a lot of these questions. It's really hard to upset that apple cart. Lobbying doesn't have much to do with it.
The more invisible taxes are to the individual person, the less they think about that money (and the higher taxes can go without them complaining too much).
Rent feels expensive because every month you write a check for rent. However, for many people, taxes are a much bigger expense than rent. But taxes don't feel as painful, because people don't write a check every month for taxes. Taxes are just invisibly withdrawn from your paycheck.
The easier and more invisible it is to pay taxes, the more you forget about how much money that really is. If you believe in constrained government, there's a good case to be made that we should make tax payments more visible, not less.
While their digital tools for filing taxes make the telegraph feel modern, the "in person" experience is full of helpful people and takes about 90 minutes including travel time.
My main criticisms of the filing process:
1. The tax bureau has a one-month timeframe where you can go, in person, to file "on time", but only during business hours. Any 9-5 worker must take time off to go file in person. It's a pretty nice customer experience - the volunteers in the bureau help you file your taxes with the highest deductions possible, it gets cross-checked by a government tax clerk, and you're done.
2. Make the software work better on a modern OS and give it modern usability. It's a _really_ crappy UI, and I only run it in a VM just in case, because the download site also looks shady.
3. Locals gaming the system can make your life harder as a working-from-home small business owner. Many landlords don't pay income taxes on their properties, which means tenants cannot register business addresses at their homes, and must "rent" an address for about $100 / month.
4. Withholdings on foreigners, by default, are artificially high as a "precaution".
5. Refunds process in August after filing in May, because they still process every return largely by hand.
6. Double taxation on people like US citizens. The tax clerk has asked friends of mine, while filing, to show their US tax return to make sure there are not more taxes owed. They can ask, but it's not enforceable. So why do it? Because the tax rate on that income earned elsewhere can be as high as 30-40%! The tax clerk gets to decide how bad of an offender you are. GLHF.
7. If your income goes down compared to the year before as a foreigner, you will probably pay a penalty for "making less money" because they suspect tax evasion. Pay the fine (less than $USD 100) and walk away, or they dig your records hard and you could wind up in a situation like #6 above.
The standard of security is, make the target more expensive to breach than it's worth to the attacker. How much would it be worth to have access to the tax returns of large swaths of the population?
I don't know the answer, but I'm guessing it's easily worth billions of dollars. Foreign intelligence services would very much like that information, as well as sophisticated criminals.
I am very doubtful that Intuit or H&R Block, for example, invest in security sufficient to protect themselves against that level of attack.
See the final paragraphs, which I've copied below - it's essentially a version of the Citogenesis effect.
"Dennis Huang, executive director of the L.A.-based Asian Business Association, also told ProPublica he was solicited by a lobbyist to write about return-free filing. When the lobbyist sent him a suggested op-ed last summer and told him the proposal would hurt small businesses, Huang wrote an op-ed in the Asian Journal that claimed Asian-owned businesses would not only spend more time paying taxes, but they'd also get less of a refund each year.
Huang declined to disclose the lobbyist's name, but acknowledged he didn't really do his own research. "There's some homework needed," he said.
Oregon's Martin did some research on return-free filing and now supports it. She also co-published a post about the issue and the PR efforts related to it because, she says, she was alarmed that other nonprofits could easily agree to endorse a position they did not fully understand.
"You get one or two prominent nonprofits to use their name, and busy advocates will extend trust and say sure, us too," Martin said."
The fact that companies can manipulate the government into keeping the taxes complex is also the government's fault.
Massachusetts had free tax filing, but got rid of it this year. It was really great, and the fastest refund I ever received.
The financial might takes away so much from the commons and also pushes back adoption of good Open Source software. We constantly underestimate the damage done by these mega-corps.
 http://blogs.wsj.com/riskandcompliance/2015/08/12/former-sap... http://www.iafrikan.com/2017/02/21/sap-south-africas-managin...
1. The complexity of the tax code
2. The complexity of filing
They are not the same thing.
You have to go through some process to file now. Let the government go through that laborious process.
For those who are not conflating the two, this point does not apply.
It's been one of their issues for a long time eg http://www.ccianet.org/2011/09/irs-tax-prep-not-a-budget-sol... and http://www.ccianet.org/2002/01/treasury-irs-announce-efforts... ... and it's a major activity of theirs http://sunlightfoundation.com/2013/04/15/tax-preparers-lobby...
> The Computer and Communications Industry Association (CCIA), of which Intuit is a member, represents its members on a wide range of technology issues, but spends a significant amount of its $6.7 million in lobbying on tax simplification. As ProPublica's piece points out, the group operates the website "Stop IRS Takeover" which bashes the idea of pre-filed returns. Disclosures indicate the CCIA has focused on "issues pertaining to tax preparation services" and legislation involving simple filing. It lobbied in support of Rep. Lofgren's bill that would have barred government-filed returns, and rallied against Sen. Akaka's bill that would have let taxpayers file directly through the IRS "without the use of an intermediary."
> The CCIA is an active political giver as well, doling out over $650,000 over the past 20 years with 91% going to Democrats. Silicon Valley-based Rep. Lofgren, an opponent of IRS-prepared returns, has been the biggest beneficiary of CCIA donations, collecting over $12,000. The group also opposed John Chiang in the 2006 California controller election, chipping in $50,000 to the Alliance for California's Tomorrow, the same group that received $1 million from Intuit.
CCIA also gets lots of mentions in https://www.warren.senate.gov/files/documents/Tax_Maze_Repor...
This helps explain why there is an anti-net-neutrality faction. It's not just because people don't understand. And it's not just because people are bought and paid for. There are actually legitimate arguments for privatizing things and then regulating them rather than nationalizing and trusting the government to manage effectively.
I would have expected at least most of western Europe to have reached that point.
> Intuit argues that allowing the IRS to act as a tax preparer could result in taxpayers paying more money.
The majority of the work would be done automatically by some free jQuery and PHP scripts (hello, Obamacare website), and taxpayers would only have to shell out the initial cost. Even if TurboTax is $19 per year, I have a hard time believing that 200MM taxpayers x $19 would come out to less than the cost of running enterprise servers for online consumers.
> [...] "STOP IRS TAKEOVER" campaign and a website calling return-free filing a "massive expansion of the U.S. government through a big government program."
I honestly laugh at this one. Exactly which part of the information the IRS processes is not already in the IRS's possession? With that statement, they're really reaching for the dumbest people who will hear them out.
> Explaining the company's stance, Intuit spokeswoman Miller told the Los Angeles Times in 2006 that it was "a fundamental conflict of interest for the state's tax collector and enforcer to also become people's tax preparer."
I have to place a call to Intuit; maybe they will sponsor my idea that I should fill out and assess my own responsibility when it comes to a parking ticket. I mean, you cannot trust the government to be fair to you, so I should get a note: "you violated a parking zone - fill out this form and return it to us with your own assessment of your penalty". Gosh, imagine the wild wild west we would be living in if you stretch this to criminal law.
Kind of like the current healthcare debate in the U.S., where the problem seems to be "how do we allow insurance companies, pharmaceutical companies, etc. to make maximum profits while providing healthcare for the maximum number of people at minimum cost?"
If you ask me, the correct answer is for the government to give Intuit some one-off compensation (or buy the company from its existing shareholders), shut it down, and reform the system.
Why should the government pay money to a private corporation or shareholders? Because they created this mess ("opportunity") in the first place with a ridiculous tax code. It's just the actual, realized, dollar-value cost of the mess that was created, instead of externalizing the cost onto taxpayers. Once that cost has been paid, the system can be fixed.
For example, imagine if most baseball teams started excluding non-white athletes again. If I owned a team, I'd immediately start hiring those athletes; I'd have by far the best team in baseball in short order (EDIT: To be clear, I'd have the best team because I'd have the market cornered on a large portion of the talent; it wouldn't be because of some imagined (and false) racial differences in performance.)
But in business (and in sports) it never works out that way. Certainly some executives have discriminatory attitudes, but that doesn't account for all. Certainly some will give in to and/or are more exposed to social pressures, but not all. With the very high demand for talent, why aren't there businesses snapping up the older developers?
(There also is another problem: Excluded groups tend to avoid that marketplace. They are discouraged by their parents, teachers and peers, and they may not want to have to deal with discrimination every day for their whole careers. But that shouldn't apply as much to middle-aged developers; that labor supply is already in the pipeline.)
They took that old saw about only the grandkids being able to make the VCR stop blinking 12 and applied it to the entire frikkin world and they are unable to tell that it isn't even remotely true.
I didn't see the original slashdot discussion so didn't have context, but IMHO the phenomenon described is more that programming in the USA and Europe has become a blue collar job, as it has been in Japan for decades. I think this is great -- the tools have become strong enough that someone with limited (but nonzero) training can produce solid apprentice and journeyman work. (masters, well, I know I will never be even close to some of the master machinists I've been lucky to work with; they can't craft code like I do).
The fact is you don't need a programmer to throw together a viable web site. Isn't that great!?
Innovation is still coming out of computer science and trickling down into the real world. And highly experienced programmers are also contributing cool stuff. But as programming has opened up as a discipline to a significantly larger pool of people, is it any wonder the density of new ideas per capita would go down?
(oh, and CS has always scorned reading the literature, to its detriment).
I don't think it was linked in the post.
When I talk to many people under 35, across fields, they often complain about workplace disorder, mismanagement, and lack of planning. They also complain of an overload of work and of working too many hours. I don't think this is a coincidence.
Having spent the last 3 months totally involved in the US medical field regarding a family member's life-critical condition, I can say with confidence: the professional credential/certification process works really well when built well, and software engineers with an interest in seeing our field professionalize should strongly consider the medical community as a working example - remember - "rough consensus and running code".
I think some of us take ourselves a bit too seriously. As a programmer, you have not just First World, but actually Zeroth World problems. In a world of struggle, we are having a guilt-free, easy ride.
Aside from that, I agree with the author's points.
What I realize is that I know it because of life experience and there is no way I could have learned it (or maybe it's a realization, not a learning) without having lived through all those winters.
The impact of this is that as we gain experience, we can be more deliberate and selective about where we focus our talents and energy, entering into fewer endeavors but having a higher success rate per endeavor.
This is why there are a host of lawsuits over ageism and a bunch of job portals like https://www.giantsforhire.com/ or https://oldgeekjobs.com/ popping up.
1) Hackers who grew up with technology and are typically very proficient and employable before entering university or having much work experience.
2) People who went to CS school in order to get a job as a developer because the pay is pretty decent and they were always kind of good with computers.
I'd much prefer to work with people that grew up as hackers than people who learned later in life. It's a cultural thing more than an age thing, but I feel like it's very hard to see the big picture if you weren't indoctrinated into it on IRC or forums when you were a teen. I'd imagine it's sort of like growing up on a farm vs going to agriculture school.
Because of this, it's very hard to tell as people get older which camp they came from. If someone is young and knows a ton, it's clear that they grew up in hacker culture and likely have a breadth of knowledge. If someone studied hard for years and never was part of the culture, IMO they will be harder to work with and not have as much general knowledge.
As a bonus, if you grew up in the culture and you're applying for an entry level job, you probably already have a decade of hacking under your belt, and that's work experience that's totally unaccounted for on a resume.
The result of this is that if you see someone young and surprisingly knowledgeable in many computer related areas, they probably grew up steeped in hacker culture and have tons more useful general knowledge than someone without that extra decade of experience.
BTW, I'm not advocating for age discrimination, but it may actually be more a matter of undocumented-experience discrimination than age discrimination.
Popular music, athletics, professional team sports, combat sports, hospitality, beauticians, the military (a big one, that one), until recently airline hostesses... there's tons of them.
"Im not bitter; I just believe weve fallen into a bunch of bad habits in our industry which need a good recession and some creative destruction to weed out what is limping along."
A recession might actually make things even worse. At least part of the problem in the industry is the growing power of monopoly. Firms are increasingly protected from competition, and therefore bad habits can thrive because the bad habits are not enough to bring down the firm. That is, there isn't enough competition for the firm to be hurt by its own bad habits. And this lack of competition goes back to the declining rate of new company formation:
Also, see this chart:
"Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules."
I'm not disputing this statement, but there is no reference to the supporting research either. I get that this isn't an academic paper, but I'd be curious to see the research they're referring to nonetheless. Does anyone here happen to know what they may have relied on for that claim?
Interesting to see a cartoon cited in a government publication.
It's in the same spirit, has some things covered in even more detail and it even has tests you can run on your solutions to verify correctness and instantly receive feedback.
If you click into a section, you can run the python code and get the output.
1) Air Force ICBM launch operations: The General in charge had a serious drinking problem (consider that for a moment), to the extent that he went on a bender in Moscow. Among launch officers, there was widespread cheating on qualification tests, disregard for regulations (such as sealing doors to secure rooms), and very low morale.
2) Air Force nuclear bomber operations: At one point, they lost track of a nuclear bomb (or maybe cruise missile), and it was flown to a base in another part of the country before anyone figured it out and could track it down. The Secretary of Defense fired the General in charge.
3) Security at facilities containing highly enriched uranium/plutonium (the essential material to making weapons; the one component that keeps terrorists from making one): At one facility, some peace protestors (not James Bond-level attackers) breached the security and set up a protest next to a building containing the materials. They were there for something like 30 minutes before they were discovered and apprehended.
4) And now this.
This isn't a system that can succeed 99.999% of the time. If one nuclear weapon gets into the hands of someone willing to use it, millions of people will die and then you can imagine the response - the course of history and civilization will change.
I can think of about 50 more dangerous things I'd rather not pass on the highway.
The truck carrying a nuclear warhead is about a squillion times more dangerous than the warhead itself.
Trucks have killed more people than nuclear weapons ever will.
Umm... is it just me, or does this really not seem like the worst thing that could happen?
You have a truck with nuclear bombs in it, travelling across America... isn't the bomb already in the place that a terrorist would want it to be?
I would think that the worst thing that could happen, is for one of these trucks to blow up - or am I missing an understanding of how nuclear bombs go off?
(...sure, you could call zfs or btrfs "mainstream", I suppose, but when I say "mainstream" I mean something along the lines of "officially supported by RedHat". zfs isn't, and RH considers btrfs to still be "experimental".)
They even spread the metadata across the disk by default. I'm running on some old WD-Greens with 1500+ bad sectors and it's cruising along with RAIDZ just fine.
There is also failmode=continue, where ZFS doesn't hang when it can't read something. If you have a distributed layer above ZFS that also checksums (like HDFS), you can go pretty far even without RAID and with quite broken disks. There is also copies=n. When ZFS broke, the disk usually stopped talking or died a few days later. btrfs and ext4 just choke and remount read-only quite fast (probably the best and correct course of action), but you can tell ZFS to just carry on! Great piece of engineering!
I spent a day chasing what turned out to be a bad bit in the cache of a disk drive; bits would get set to zero in random sectors, but always at a specific sector offset. The drive firmware didn't bother doing any kind of memory test; even a simple stuck-at test would have found this and preserved the customer's data.
In another case, we had Merkle-tree integrity checking in a file system, to prevent attackers from tampering with data. The unasked-for feature was that it was a memory test, too, and we found a bunch of systems with bad RAM. ECC would have made this a non-issue, but this was consumer-level hardware with very small cost margins.
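For anyone who hasn't seen the construction, here is a minimal sketch (my own illustration, not that file system's actual code) of a Merkle tree over data blocks; a flipped bit in any block, or in RAM while the hashes are being computed, changes the root:

    import hashlib

    def merkle_root(blocks):
        # Hash every data block, then hash pairs of hashes up to a single root.
        level = [hashlib.sha256(b).digest() for b in blocks]
        while len(level) > 1:
            if len(level) % 2:                     # odd count: duplicate the last hash
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
    stored_root = merkle_root(blocks)

    # Later verification: recompute and compare against the stored root.
    assert merkle_root(blocks) == stored_root
    blocks[2] = b"block-2 with a flipped bit"      # corruption, or bad RAM during hashing
    assert merkle_root(blocks) != stored_root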
It's fun (well maybe "fun" isn't the right word) to watch the different ways that large populations of systems fail. Crash reports from 50M machines will shake your trust in anything more powerful than a pocket calculator.
My assumption is the read will fail and the error logged but there is no redundancy so it will stay unreadable.
Will ZFS attempt to read the file again, in case the error is transient? If not, can I make ZFS retry reading? Can I "unlock" the file and read it even though it is corrupted, or get a copy of the file? If I restore the file from backup, can ZFS make sure the backup is good using the checksum it expects the file to have?
Single-disk users seem to be unusual, so it's not obvious how to do this; all the documentation assumes a highly available installation rather than a laptop. But I think there's value in ZFS even with a single disk - if only I understood exactly how it fails and how to scavenge for pieces when it does.
I scrub on a weekly basis. One day ZFS started reporting silent errors on disk ada3, just 4kB:
      pool: tank
     state: ONLINE
    status: One or more devices has experienced an unrecoverable error. An
            attempt was made to correct the error. Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
            using 'zpool clear' or replace the device with 'zpool replace'.
       see: http://illumos.org/msg/ZFS-8000-9P
      scan: scrub repaired 4K in 21h05m with 0 errors on Mon Aug 29 20:52:45 2016
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                ada3    ONLINE       0     0     2  <---
                ada4    ONLINE       0     0     0
                ada6    ONLINE       0     0     0
                ada1    ONLINE       0     0     0
                ada2    ONLINE       0     0     0
                ada5    ONLINE       0     0     0

    2016-09-05: 1.7MB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-09-12: 5.2MB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-09-19: 300kB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-09-26: 1.8MB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-10-03: 3.1MB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-10-10: 84kB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-10-17: 204kB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-10-24: 388kB silently corrupted on ada3 (ST5000DM000-1FK178)
    2016-11-07: 3.9MB silently corrupted on ada3 (ST5000DM000-1FK178)
The next week the server again became unreachable during a scrub. I could access the console over IPMI but the network seemed non-working even though the link was up. I checked the IPMI event logs and saw multiple correctable memory ECC errors:
Correctable Memory ECC @ DIMM1A(CPU1) - Asserted
    MCA: Bank 4, Status 0xdc00400080080813
    MCA: Global Cap 0x0000000000000106, Status 0x0000000000000000
    MCA: Vendor "AuthenticAMD", ID 0x100f80, APIC ID 0
    MCA: CPU 0 COR OVER BUSLG Source RD Memory
    MCA: Address 0x5462930
    MCA: Misc 0xe00c0f2b01000000
The reason these memory errors always happened on ada3 is not because of a bad drive or bad cables, but likely due to the way FreeBSD allocates buffer memory to cache drive data: the data for ada3 was probably located right on defective physical memory page(s), and the kernel never moves that data around. So it's always ada3 data that seems corrupted.
PS: the really nice combinatorial property of raidz2 with 6 drives is that when silent corruption occurs, the kernel has 15 different ways to attempt to rebuild the data ("6 choose 4 = 15").
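A quick sanity check of that arithmetic in Python (just counting the subsets, not how ZFS actually schedules its reconstruction attempts):

    from itertools import combinations
    from math import comb

    drives = ["ada1", "ada2", "ada3", "ada4", "ada5", "ada6"]
    subsets = list(combinations(drives, 4))   # every 4-of-6 subset that could rebuild a block
    print(len(subsets), comb(6, 4))           # prints: 15 15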
In both sense of the word.
Many moons ago, in one of my first professional assignments, I was tasked with what was, for the organisation, myself, and the provisioned equipment, a stupidly large data processing task. One of the problems encountered was a failure of a critical hard drive -- this on a system with no concept of a filesystem integrity check (think a particularly culpable damned operating system, and yes, I said that everything about this was stupid). The process of both tracking down, and then demonstrating convincingly to management (I said ...) the nature of the problem was infuriating.
And that was with hardware which was reliably and replicably bad. Transient data corruption ... because cosmic rays ... gets to be one of those particularly annoying failure modes.
Yes, checksums and redundancy, please.
Unfortunately, btrfs is not stable, and ZFS needs a "supercomputer", or at least as many GBs of ECC RAM as you can buy. This solution is designed for any machine and any FS.
(Disclaimer: Google employee, unrelated product area)
No idea what power draw it has (probably $5 a month?) but I have a large solar array so I doubt it costs me anything to actually run. I also get to recycle some old hardware.
EDIT - Seems that post was linked to by the pi-hole project itself at some point. Was wondering why it got so much traffic each month.
Pay attention: this might interfere with some Google functionality because it will block Google ad services. You have two workarounds: switch your searches to DuckDuckGo, or keep Google ad services out of the blocked DNS addresses.
You do have to do more initial setup with certificates and such, but IMHO the more fine-grained filtering is worth it. The entire category of sites which actually detect blocking can be worked around this way, as you can filter out that code too.
With block you receive a UI notification of the block and the option to make an exception.
And before ads advocate responds - ask for my money instead of polluting my mental space.
I know it's a lot more work but setting up Shadowsocks and Unbound with similar DNS blacklists is a much better solution. This also comes with all the benefits of using a VPN (technically, an obfuscated socks5 proxy using the android VPN interface). If you manage the network, pfSense and pfBlockerNG are also great and easy to set up.
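For anyone curious what the Unbound side of such a blacklist looks like, a minimal sketch (example domains only, and assuming an Unbound version that supports the always_nxdomain zone type) is just a list of local-zone entries in unbound.conf:

    server:
        # Return NXDOMAIN for blacklisted ad/tracker domains and their subdomains.
        local-zone: "doubleclick.net" always_nxdomain
        local-zone: "googlesyndication.com" always_nxdomain
        local-zone: "adnxs.com" always_nxdomain

Blacklist generators typically just emit thousands of lines in this format, which you can then include from the main config.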
 https://adguard.com/en/adguard-dns/overview.html https://github.com/jedisct1/dnscrypt-proxy/blob/master/dnscr...
You can use the custom rules to add any of the adblocker lists such as HP hosts, etc.
Also whitelists play nice with voip apps too.
If you purchase a 512MB RAM machine for two years, it is $0.99/month!
I'll post my usual story:
In February 2016, SpiderOak dropped its pricing to $12/month for 1TB of data. Having several hundred gigabytes of photos to backup I took advantage and bought a year long subscription ($129). I had access to a symmetric gigabit fibre connection so I connected, set up the SpiderOak client and started uploading.
However I noticed something odd. According to my Mac's activity monitor, SpiderOak was only uploading in short bursts  of ~2MB/s. I did some test uploads to other services (Google Drive, Amazon) to verify that things were fine with my connection (they were) and then contacted support (Feb 10).
What followed was nearly 6 months of "support", first claiming that it might be a server side issue and moving me "to a new host" (Feb 17) then when that didn't resolve my issue, they ignored me for a couple of months then handed me over to an engineer (Apr 28) who told me: "we may have your uploads running at the maximum speed we can offer you at the moment. Additional changes to storage network configuration will not improve the situation much. There is an overhead limitation when the client encrypts, deduplicates, and compresses the files you are uploading"
At this point I ran a basic test (cat /dev/urandom | gzip -c | openssl enc -aes-256-cbc -pass pass:spideroak | pv | shasum -a 256 > /dev/zero) that showed my laptop was easily capable of hashing and encrypting the data much faster than SpiderOak was handling it (Apr 30) after which I was simply ignored for a full month until I opened another ticket asking for a refund (Jul 9).
I really love the idea of secure, private storage but SpiderOak's client is barely functional and their customer support is rather bad.
If you want a service like theirs, I'd suggest rolling your own. Rclone and Syncany are both open source and support end-to-end encryption and a variety of storage backends.
Am I running your software in a process that has network access? Then you can access my data.
I understand the point you're trying to make, and I totally get that architecting a system so that unencrypted data doesn't leave my device is superior to an architecture where it does.
But I still must, ultimately, trust you, your competence, and your motivations. If I trust that you don't want to access my data, and have tried to architect your systems so that is hard to do, and are competent to do so, then I can trust my data is probably safe.
But that's not the same as it being physically impossible for you to access my data.
Why the collision with academic cryptography doesn't matter: Anyone who had even a basic understanding of their product + some academic crypto background would get what they were going for: they have no knowledge of what's going on. Also, strictly speaking, the academic term is zero-knowledge proof or zero-knowledge proof of knowledge. I.e., zero-knowledge is an adjective used to describe a proof (and indeed, if you look at the history of how these evolved, that is exactly what happened). You could reasonably use zero-knowledge as a modifier for something else and it could be acceptable. Indeed, it's a fairly good shorthand for a particular class of definitions of privacy/confidentiality that require that any transcript of the protocol can be produced by a simulator who has no knowledge of what transpired.
The problem is Spider Oak's cloud backup cannot be zero-knowledge or no knowledge. It almost certainly leaks when you update files and when you delete them. Perhaps they don't log this or delete the logs, but they could. And this metadata could matter to businesses.
Zero Knowledge is something you're most likely going to find in an authentication protocol, not an encryption protocol.
While I have mixed feelings about "No Knowledge", it's at least not a collision with a different concept.
Good on SpiderOak for the effort here. It shows they do listen, at the very least.
This is clearly not "no Knowledge"....
Looking forward to standing corrected if I am wrong.
How is that possible?
So, why did they use it?
I guess a real "No knowledge" storage would be just a container and you read and write blocks, i. e. the filesystem format is implemented on the client side. Of course this make features such as versioning difficult to implement and probable everything would be a little bit slower.
Edit: The post from rarrrrrr explains the technique of Spider Oak and he links to a Blog entry. This is pretty impressive.
Anyone else pick up on this hilarious irony?
Hacker News is becoming a comedy site similar to The Onion.
I tried SpiderOak and liked the product, but the inability to pay using anonymous payment methods has led me to use sync.com instead.
I had trouble with a prepaid debit card, and they unfortunately don't take bitcoin either.
After a timeout, you don't know how the request that blocked ended. If you need that, a multi-thread or multi-process solution is indicated.
In general, QNX is much better at timing, hard real-time scheduling, and inter-process communication than UNIX/Linux. In the Unix/Linux world, those were afterthoughts; in QNX, they're core features. Because all I/O is via interprocess communication, it takes an extra memory-to-memory copy; this seems to add maybe 10% CPU overhead. That's the microkernel penalty, a moderate fixed cost.
Have you ever had a process waiting for disk that you couldn't kill? Like running "ls" in a big directory, which hangs, and you pound ^C over and over again in vain? That's this same assumption at work here. It's the difference between D and S states on Linux. So if you want timeouts on mkdir(), what you really want is a complete reengineering of the Unix system with the philosophy that local disk IO is interruptible.
Anyway, as my late systems professor used to say: timeouts are always too short or too long.
In Erlang, all I/O goes through things called "ports" and does not block; it's all asynchronous. There is no reason you couldn't do the same in C -- spin up an external process with a SIGALRM, and maintain a socket that you use, instead of dispatching the syscalls in the same process/thread that runs the rest of your code.
In Go, syscalls are scheduled on their own thread (typically), and you can run a goroutine, and wait for the output of the call.
Yes, these bugs exist, and they sometimes make writing code a pain, but they're mostly "solved" if you use the right tool.
But: What possible semantics could you want for mkdir() such that a timeout is sane?
This isn't a trick, this is expected. Calls return when they're supposed to return, or not at all. If you need to return BEFORE the system call is done, you need to do it somewhere else (like in a new thread or process, or node). This is also not limited to I/O, but basically any system call: if you return before it's done, it may break something, so it might not provide a good timeout method.
Something to ask yourself is also why you need to return before the call is done? It's similar to the NFS hard-vs-soft-mounting argument. Soft mounting can cause damage when improperly interrupted; hard mounting prevents this by waiting until the system is behaving properly again, with the side effect of pissing off the users.
The big trouble you have is that there is no approach to signals that composes; if someone else is waiting for a SIGCHLD via some other means, you're likely to eat the signal they were waiting for. If you're particularly unlucky and both children terminate while you're not scheduled, you'll only get a single SIGCHLD - even with siginfo_t, all that means is at least this child exited, maybe others did too.
Perhaps the better approach is to just spawn a thread for this, which is also the general-case answer for every other syscall, whether it's wait(), mkdir(), readlink(), or sync_file_range(). On the other hand, threads are a "now you have two problems" sort of solution.... I'd like to see a generic API where you can submit any system call to the kernel, as if it were on another thread, and select on its completion.
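To make the thread-per-call idea concrete, here is a rough Python sketch (the path is made up, and note the worker thread itself may stay stuck in the kernel; you only stop waiting for it):

    import os
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    # Run a potentially blocking syscall on a worker thread and stop waiting
    # after a timeout; the call itself is not cancelled.
    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(os.mkdir, "/mnt/flaky-nfs/newdir")  # hypothetical hung mount
    try:
        future.result(timeout=5.0)    # wait at most 5 seconds for the call to finish
        print("mkdir finished")
    except TimeoutError:
        print("still blocked after 5s; giving up on waiting (worker thread may linger)")
    except OSError as e:
        print("mkdir failed:", e)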
If anyone wants to learn more about AI chips, I'd be happy to answer questions.
How far behind is AMD with GPGPU, or AI? It seems OpenCL is a dead end. CUDA won. AMD announced some CUDA-to-(x) code conversion tooling, which never really caught on.
And the reason people don't own as much as in years before is that we realize we can't afford to. If you gave me a mansion for $10 (without any zeros behind it), I'd gladly prefer it over my apartment.
What will be the next article in the series? Children in Yemen prefer to eat less food than their bodies require?
The article is saying that suburbs are becoming more like urban areas.
I like the author's evidence that housing prices are falling:
> In that same city in 2012, a typical McMansion would be valued at $477,000, about 274% more than the area's other homes. Today, a McMansion would be valued at $611,000, or 190% above the rest of the market.
Up 28% in price - must be dying!
It's not a detail that changes the conclusion, but I want to focus on it:
The U.S. definitely can afford to maintain its roads. The country is the richest in the history of the world, richer than it's ever been. U.S. GDP in 1960, when the Interstate Highway system and many suburbs and malls were being built, was ~$3 trillion in real dollars; now it's ~$18 trillion.
The U.S. chooses not to maintain its infrastructure.
 According to one unofficial source on the web, which I'm going to trust for this purpose.
I prefer to have the privacy that's afforded by having my house set 40' back from the street, with plenty of space between my and my neighbors' homes, rather than living in a house that abuts, or is 6 feet away, from my neighbors'.
I prefer a spacious driveway and an attached, heated, two car garage, to parking on the street a block from my house, and having to dig my car out after a snowstorm.
I prefer to own a home that was built during or since the 1950s, when Romex wiring was original equipment, rather than own something that still has knob and tube wiring with decaying fabric insulation.
I prefer a home built with drywall or rock lathe and plaster, over one built with wood lath and plaster.
I've found that most pre-1910 homes I've examined do not have foundations or basements to my liking.
I prefer to live in an area where I can leave my doors unlocked and my windows open at night without giving it a second thought. Or where, if I leave the house and realize I left the front door unlocked, I don't feel compelled to go back and lock it.
I prefer to live in an environment where junkies and bums are virtually nowhere to be seen, rather than an environment where I have to be careful not to step in human feces when I walk between two parked cars.
When I can get all that affordably in the city, I'll consider moving back.
I didn't always feel this way, but I think the use of cellphones has created an environment where even slow-moving drivers are unpredictable. I now walk to work and frequently see people on their phones while driving; if a line of ten cars is stopped at a light, at least three people are on their phones. The result isn't an extreme increase in accidents but a constricting deficit of attention which incrementally lengthens every encounter on the road: more short stopping, more people missing green lights, or just driving super conservatively and not merging, holding up traffic.
Also I never use my phone while driving, but the fact that I can't even look at it when I know someone is trying to contact me is incredibly frustrating and impacts my enjoyment of driving.
They don't have money for restaurants, let alone the resources to pick up a hobby as expensive and time-consuming as golf.
Also, the no.1 reason why people seem to want to move to suburbs is children (IMO, anecdotal etc.)... perhaps the falling fertility is making married/live-in couples more willing to live in a dense, urban communities?
Living in vans. Living in boxes in larger apartments. Thirty people living in a 10-bedroom building.
There is vast demand to live in these cities, and their governments are utterly failing to accommodate it in an orderly and dignified fashion.
So I live in the MetroWest Boston area. I own a multi-family. My wife, infant son, and I live in modest square footage despite having fairly generous annual incomes (e.g. well above the AMT tax rate).
My parents used to live nearby in an affluent town (Wellesley) before just recently selling their house to a developer... who of course knocked it down to build a gigantic pimped-out house. The same developer was buying 4 other houses at the time. In short, MetroWest Massachusetts has become tear-down central.
The problem is that, other than foreign investors (a lot of Newton, Weston, Wellesley, and Needham is getting bought out by foreign investors), nobody from my age group... the age group that is looking to buy houses... wants a super mansion, even if it's decked out. (That is, these houses aren't McMansions. They are real mansions.)
From my general experience of talking to other educated thirty-somethings, people want authentic and charming, not McMansion, and yet there are all these new houses replacing old New England houses.
Yet these new houses are priced ridiculously high. And again, talking to other peers, it seems gone are the days where people buy at 4-6 times their salary. People are sometimes buying at as low as just twice their salary and are completely fine. And if people want convenience, they are going for a condo and not McMansion style.
Historically New England has been fairly bubble proof. I fear that time is coming to an end.
When it does happen, I might buy some of these big boys and convert them to multi-families... BTW, this is exactly what happened to parts of New England in the late 19th and early 20th century, where giant houses were converted to multi-families, a la Somerville and Waltham (where I live).
 - https://en.wikipedia.org/wiki/The_End_of_Suburbia
 - https://www.youtube.com/watch?v=Q3uvzcY2Xug
You know, it's funny. Some cities are trying to quickly build out higher-density apartment buildings in/near the downtown areas, and from what I've seen the same concept applies. I'd love to see some of those fancy mini-arcologies spring up, but really the boxes are just a bit bigger (or smaller, depending on how you look at it.)
I do hope that malls become very cheap real estate before long; I think it's already happening in some places. I would just love to buy what is essentially a cheap, empty, commercially-zoned warehouse and stuff it full of things like laser tag, arcades and e-sports, a couple of lounge areas and coffee shops/bars depending on the time of day...but I doubt you could make something like that work without very cheap square footage and a lot of people within a 30-45 minute drive.
I mean, currently NYC has a ridiculously low carbon footprint for its population size. So if more places become dense population centers with public transportation like NYC, it can only be a positive thing in terms of mitigating climate change.
I recommend anyone interested in this to read Open Standards and the Digital Age, by Andrew Russell. The book is a partial refutation of the idea of an 'open web' using historical examples, the most shocking being the failure of open and democratic methods to build an open Internet standard versus the success of Cerf and Kahn inventing TCP/IP in a closed and corporate environment, funded by the military. The reality is that some systems critical to the operation of the Internet, such as DNS, are highly centralized, un-open, un-private and un-free, or at least when compared with cyberlibertarian expectations of how the Internet should be. It also addresses other perversions of 'open', such as the irony that some of FLOSS's biggest customers are megacorps like Apple and Microsoft, who up until recently contributed way less back than what they took.
I don't think the web was 'stolen' from 'us', I think people just don't realize how controlling it was before. They're mistaking an epiphany for an actual loss of freedom that may or may not have been there in the first place. We need to fight for a free and open Internet, but let's not kid ourselves with inaccurate and misleading language.
Most people I know in tech have to be on Slack. Most people I know in advertising and PR have to be on Facebook, Instagram, and Twitter. For some jobs, like journalism, even the "troops on the ground" are required to post and tweet. Even for jobs without the social media taint, a lot of companies use corporate gmail, so now Google is way up in your business.
In 2007, you could probably just chalk it all up to poor personal choices. In 2017, I don't know if that's entirely true. We're in a situation that cries out for regulation, although that will probably not happen in the US until after a calamity, since regulation is seen as one of the heads of the beast in our money religion.
Plus, using data to improve the user experience, recommendations (YouTube, Netflix), and targeted ads seems pretty neat (I prefer ads about tech products and my interests compared to makeup or pads). The products get better and improve. Facebook doesn't sell your information; they let advertisers use it to target you. Probably more profitable to let companies use the data instead of selling it.
Companies like LexisNexis and Acxiom are the ones I'd be really worried about. Some states' DMVs even sell databases. I'd be more worried about them; Google and Facebook are way better corporate citizens than these mega data-broker companies would be. At least Google and Facebook you can opt out of. LexisNexis, good luck opting out. Last time I checked, only law enforcement officers who fear they are in danger can opt out.
Regulation is what kills innovation. The fact that the government has mainly left the internet alone is probably why it's one of the most innovative industries. Imagine having to read a 300-page, two-column document and wait on a lengthy process before you are even allowed to put up a blog.
Then all of this talk lately about "fake news" just seems like censorship. I am worried that some day the internet will be so over-regulated and censored that it will be just like cable television.
If you want to reboot the web then you need to reboot the internet first, solve the insecurity of privately hosted servers first and convince ISPs that symmetric connectivity should be the rule.
After that you have a fighting chance.
If unethical practices become normal, the thing to do is to get a law passed. It's the way this has always worked. Laws change the entire landscape of commerce. They shake things up enough to where a new status quo is found. Law isn't perfect but it can shift the ethical regime more in the direction of the people.
The author's recommendation of a world without kings is a fantasy. If you eliminate hierarchy that means everybody must become an institution. Being an institution is not fun. It's fun to fantasize about building your own house but only the really motivated actually do it. Kings do us a favor by creating structure where there once was none. Silicon Valley is ultimately a force for good.
I know sometimes it's easy to play the evil mega-corp card, but we need to ask ourselves the question: what is the goal here, to take down Google and Facebook? 'Cause if you're worried about an internet with extra surveillance and restrictions, taking down Google and Facebook doesn't really solve things.
Plus, even in a world with Google and Facebook out of the picture, there will still be political trolls hired by other companies and nation-states. There are also alternate-Googles that can just swoop in and fill the void you create if say you do take down Google. They are not necessarily better than Google today.
Edit: not saying it is, but what about the WA deal?
That said, I think there's a real market for closed content like FB's. And even though I find a decentralized system more appealing, I can imagine a new, closed/centralized system taking FB's place in the future.
That said, I personally agree with the author in identifying the main problem of the web as people tracking. In my opinion, Tim Berners-Lee's points about misinformation and political advertising are not specific to the medium, but rather to the times we live in. People are pissed, people are scared, they need something to blame, they need some fantasy to believe in, they make up scary news, they vote for the guy that gives them a dream.
What's specific to the web though, (and that is starting to spread out of the web) is the data tracking. Whether for advertising ends or for surveillance purposes, data tracking creates a power imbalance between people and systems that is unbearable.
That power imbalance is the weirdness you felt the first time you saw a gmail ad related to the email you were reading. It's the anger that heats up your cheeks when the sales guy asks for your email address when you just want to buy shoes. It's the 2-hour phone call to the customer service that ends in "I'm sorry there is nothing I can do for you". It's the "late fee" mails you automatically receive for a service that you cancelled. It's realizing that the app your employer installed on your phone can tell them your location at all times. It's the swatting that reminds you not to shop for pressure cookers online. It's the cameras. It's the cars. It's the lightbulbs.
We as people are weak. I don't think Silicon Valley intended it that way. I think they genuinely wanted to improve the world. And in order to keep it cheap, they found money where they could, and in the process, they undermined people's privacy in a way that is making the world a lot worse than it was.
I personally feel hopeful. Countries are made of people. And I think that people are starting to get it. We need rules to prevent this. Laws that force companies to automatically give you the option not to track you. The same laws that forced mailing list senders to have the unsubscribe button (thank god for the unsubscribe button!). For this to happen, we need lobbying, we need awareness. We need a "this website is not tracking you" label. We need privacy checks.
If you're using services that support surveillance capitalism or you are working on such products, please stop. Thank you!
How many people here still use usenet vs reddit/HN ?
I will guess that the 'cruel myth' is that anyone can spend the requisite time and do anything. (Some people just don't get / can't get X, even if they want to and try really hard.)
However, bear in mind that plans are rarely followed to execution perfectly. You may meet someone who wants you to stay, or you may get a really good offer. You might experience financial hardship and need to settle down for a while.
When I go on a hike, I spend a good hour or two studying maps (topographical, orthophoto, etc) before picking a trail. It means that I can decide on a whim to follow another trail halfway through if conditions call for it (mud, rain, wild animals, etc). Planning is about mapping out all possible outcomes, and not so much about following one plan to the letter.
It doesn't work very well for goals that are hard to measure, but it can be applied in a lot of situations. Good luck tackling all your goals :)
The issue is that we can't know what's real any more. It used to be if you saw a video or a photo depicting an event you could be pretty sure that what you're looking at actually happened.
Now, if you see a video of a prominent politician saying something awful in your twitter timeline (or whatever), they may have actually never said anything remotely close. It could be a completely fictional video that looks perfectly realistic, made by some teen in Macedonia.
I realize photography and video have always been used to trick people into thinking things that aren't true, but this technology enables nuclear-grade deception.
I am wondering: is there a use-case for such an algorithm that is practical and good for the world?
PS: I know an eye-rolling algo is quite innocuous but I've had this thought on my mind about these in general and needed to air it out.
 https://www.youtube.com/watch?v=ohmajJTcpNk https://www.wired.com/2017/02/veles-macedonia-fake-news/
Sample result: http://imgur.com/a/nyG4Z
Boats are supposed to switch over to a cleaner fuel when they enter port. For example, Port of Oakland is upwind of residential housing in Oakland. So this is a public health issue. Even the terminal tractors (port trucks) idling are an issue. Hopefully they'll switch over to EVs:
Boats are designed for a critical hull speed. Emma Maersk cruises at 31 mi/h on the open ocean.
That bulbous nose on container ships sets up a counter bow wave to lower drag but only at a certain cruising speed. However, shippers weren't paying a premium for that higher speed and although it's more efficient for that hull it was still costly.
So new boats are tuned to a more efficient lower speed (slow steaming) with less powerful engines and even older boats are getting hauled into dry dock and re-nosed for a lower speed. Overall shipping speeds are down and shipping costs are also down.
While the new Panama Canal extension could be a fiasco in its own right (100 years later and not nearly as well built; it leaks) new canals could improve things. The Thai Canal could make the Suez route more competitive than the Panama route for Asia to Europe.
Lastly, like airlines, it's really hard to make money in shipping. Witness the Hanjin bankruptcy:
The City of Oakland owns the Port of Oakland and we don't make much money off of it either. $16M/yr for both the airport and the port, last time I checked.
So, why should we care? Presumably the banks have paid analysts to determine that was a sound investment.
If governments are doing their jobs, banks should be able to eat this kind of loss without becoming insolvent. Otherwise why bother having regulations at all, if every minor hiccup means taxpayers have to bail out the banks?
Why do I care if shipping companies go out of business because of over capacity? Isn't that what market forces are all about?
So we should keep dangerously polluting ships running, because the banks that loan the shippers money will lose their shirts for several quarters if the shipping company goes bust?
The pollution has more to do with the type of fuel used.
And it seems the fix is to urge the companies to update their ships by not allowing them in ports, but considering how long these articles have been coming out, it looks like progress is slow on that front, if it has changed at all. Shipping companies have been selling off some of their stock, and it would seem that at least a few of the older ships should have been included.
Oxides of sulphur are not greenhouse gasses. Nitrous oxide is a greenhouse gas but it doesn't come from burning fuel.
These links go into the actual reasons these sorts of pollutants are bad:
Interestingly enough, there is some thought that nitrogen oxide emissions from ships actually cause global cooling.
A crude search yields this about Emma Maersk, one of the largest container ships.
She is powered by a Wärtsilä-Sulzer 14RT-flex96-C engine, the world's largest single diesel unit, weighing 2,300 tonnes and capable of 81 MW (109,000 hp) when burning 14,000 litres (3,600 US gal) of heavy fuel oil per hour. At economical speed, fuel consumption is 0.260 lbs/hp·hour (1,660 gal/hour). She has features to lower environmental damage, including exhaust heat recovery and cogeneration. Some of the exhaust gases are returned to the engine to improve economy and lower emissions, and some are passed through a steam generator which then powers a Peter Brotherhood steam turbine and electrical generators. This creates an electrical output of 8.5 MW, equivalent to about 12% of the main engine power output. Some of this steam is used directly as shipboard heat. Five diesel generators together produce 20.8 MW, giving a total electric output of 29 MW. Two 9 MW electric motors augment the power on the main propeller shaft.
So you would need about 285 Tesla Model S P100D motors to power a ship of this size. Doable, I guess. Again, I'm no expert on shipping.
Why is it so difficult to measure fuel consumption on ships?
"Hence the interest in new green-lending structures. ... The idea is to share the fuel savings between the shipowner and the charterer over a longer contract, giving both an incentive to make the upgrades. Such schemes used to be thwarted by the difficulty of measuring exact fuel consumption on ships. New technologies allow more accurate readings."
This is the exact same problem that arises in landlord/tenant relationships when it comes to things like insulating a property. Insulation might be relatively cheap and pay itself back in a few years. But the landlord doesn't have an incentive to insulate because the benefit goes to the tenant. The current tenant also won't insulate because they'll probably leave before they can realise all the benefit of their investment.
In theory, landlords or shipowners should have an incentive to invest, since it should improve their property and therefore allow them to increase their rents or charter fees, but for some reason this doesn't happen. Possibly consumers can't accurately assess the value of improvements so they are reluctant to pay more.
The measurement devices mentioned should allow both parties to have a more accurate way to share in the benefits.
It's a complicated dance of incentives and information...
It seems like oil gets refined with gasoline going to cars and heavier fuels going to ships. Can we really say that cars are so much cleaner? Their fuel is surely subsidized by a market for the heavier fuels.
edit: many have responded calling residual fuel a "waste product" - it is useful and being used, so calling it a waste product strikes me as semantically incorrect. If it were being sold opportunistically - like if a large proportion of it was going to waste but some was being sold - I would agree with that, but it seems like it's all being sold, right?
The initial build & deploy times for Clojure on Android are annoying, but using a REPL to edit your Android app dynamically is amazing.
It's sad when a clean-slate project, for a platform created at a time when we should have known better after decades of experience, still ends up with this kind of overhead.
It perfectly matches my recollection of doing Android development too: a big uncontrollable mess of different kinds of files whose inter-dependencies and relations are completely opaque.
Recombinant silk is a pretty cool platform upon which an entire industry can rationally and rapidly tune the properties of new fabrics.
Play with the protein ADF3, orb-weaver dragline silk: https://serotiny.bio/notes/proteins/adf3/
From Dan's original paper: http://onlinelibrary.wiley.com/doi/10.1038/msb.2009.62/full
Article here: https://www.bloomberg.com/news/articles/2015-06-03/a-bay-are...
HN thread here: https://news.ycombinator.com/item?id=9659712
I guess that technically they're just a little late, but honestly an ultra-limited run of ties without any impressive spidersilk properties makes me think that not a lot has changed in the last almost-two-years.
We'll see if Bolt can actually make a real serious product. As the article points out, there have been a lot of swings at this particular pitch.
Just because something is light and super strong does not make it comfortable. Otherwise we would all be wearing nylon all the time (or some derivation... nylon is super strong and of course there is Kevlar as well).
The problem is nylon is pretty damn uncomfortable.
I remember seeing something on PBS where they showed spider silk and I admit it seemed compelling and I can easily see it probably mixed in with other textile yarns.
Of course, why stop at spiders and not modify geese, goats, and sheep for their incredible and generally efficient fabric production?
Synthetics still have a hard time beating goose down for warmth. Is spider silk going to be better for high loft? I doubt it.
For the ng2 debate: I have been using ng2 to build Huula for about a year, so there's a fair amount of ng2 code behind it. Here are my two pennies for folks who are considering using it. I don't like to be constrained by a framework, and Angular's routing system (ng1 included) always stood in my way, so I ditched Angular routing entirely. I never used Angular's inline styles either; Sass works better for me. Ng2's change detection can be stupid sometimes, especially for things like drag and drop, but there are solutions for those cases, although a bit clumsy and convoluted, so be careful.
Most of the time I've seen something Angular-related appear on HN, there's generally a facet of the community lambasting it as too big/too enterprisey.
I don't personally use the product, but I find the source well written and always share it as an example of nicely done C code.
Here is some nice Erlang code I like -- network packet parsing:
Notice how concise this is:
    codec( <<4:4, HL:4, ToS:8, Len:16,
             Id:16, 0:1, DF:1, MF:1,    %% RFC791 states it's a MUST
             Off:13, TTL:8, P:8, Sum:16,
             SA1:8, SA2:8, SA3:8, SA4:8,
             DA1:8, DA2:8, DA3:8, DA4:8,
             Rest/binary>> ) when HL >= 5 -> ...
Another thing here is that it is also big endian by default, so there is no need for htons(), htonl() and such functions sprinkled throughout the code.
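For contrast, here is a rough Python sketch of the same IPv4 header parse using the struct module; the ">" prefix selects network (big-endian) byte order, which is the same "no htons()/htonl()" convenience the Erlang pattern gets for free. The field names just mirror the Erlang pattern above - this is an illustration, not production parsing code:

    import struct

    def parse_ipv4_header(packet: bytes):
        # First 12 bytes: version/IHL, ToS, total length, id,
        # flags + fragment offset, TTL, protocol, header checksum.
        ver_ihl, tos, length, ident, flags_off, ttl, proto, checksum = \
            struct.unpack_from(">BBHHHBBH", packet, 0)
        version = ver_ihl >> 4
        header_len = ver_ihl & 0x0F          # in 32-bit words, must be >= 5
        src, dst = packet[12:16], packet[16:20]
        rest = packet[header_len * 4:]
        return version, header_len, tos, length, ident, ttl, proto, src, dst, rest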
So, reading someone else's code can be like reading a model of something that has gone through dozens of iterations, where the bottleneck isn't really understanding "the code", but understanding the thing that is modelled, which the people writing the code are intimately familiar with, unlike you.
In my opinion, developing this modelling skill, in yourself, is much more important than watching the result of someone else exercising their modelling skill. A lot can be learned from observing someone else's solution, but this will always be secondary to learning how to craft your own.
 For example: try reading compiler code. People have been writing compilers for so long that reading and understanding this code isn't about understanding the actual code, but about understanding how a compiler is modelled (scanning, lexing, parsing).
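As a toy illustration of what "the model" looks like at the scanning/lexing stage - a from-scratch sketch, not taken from any particular compiler:

    # Minimal scanner: turn "12 + 34 * x" into a stream of (kind, text) tokens.
    import re

    TOKEN_RE = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(.))")

    def tokenize(source):
        tokens = []
        for number, name, other in TOKEN_RE.findall(source):
            if number:
                tokens.append(("NUMBER", number))
            elif name:
                tokens.append(("NAME", name))
            elif other.strip():
                tokens.append(("OP", other))
        return tokens

    print(tokenize("12 + 34 * x"))
    # [('NUMBER', '12'), ('OP', '+'), ('NUMBER', '34'), ('OP', '*'), ('NAME', 'x')]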
> Seibel: I'm still curious about this split between what people say and what they actually do. Everyone says, "People should read code" but few people seem to actually do it. I'd be surprised if I interviewed a novelist and asked them what the last novel they had read was, and they said, "Oh, I haven't really read a novel since I was in grad school." Writers actually read other writers but it doesn't seem that programmers really do, even though we say we should.
> Abelson: Yeah. You're right. But remember, a lot of times you crud up a program to make it finally work and do all of the things that you need it to do, so there's a lot of extraneous stuff around there that isn't the core idea.
> Seibel: So basically you're saying that in the end, most code isn't worth reading?
> Abelson: Or it's built from an initial plan or some kind of pseudocode. A lot of the code in books, they have some very cleaned-up version that doesn't do all the stuff it needs to make it work.
Both are in Python, and it's beautiful code: well structured, and you would not need any docs - just read the code.
I particularly liked Brian Kernighan's description and implementation of a regex matcher, and Travis Oliphant's discourse about multidimensional iterators in NumPy.
Worth a read.
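For anyone curious, here is a from-memory Python sketch of the kind of tiny matcher Kernighan walks through there (supporting ".", "*", "^" and "$"); it's an approximation for illustration, not the book's exact code:

    def match(regexp: str, text: str) -> bool:
        """Search for regexp anywhere in text."""
        if regexp.startswith("^"):
            return matchhere(regexp[1:], text)
        while True:
            if matchhere(regexp, text):
                return True
            if not text:
                return False
            text = text[1:]

    def matchhere(regexp: str, text: str) -> bool:
        """Search for regexp at the beginning of text."""
        if not regexp:
            return True
        if len(regexp) >= 2 and regexp[1] == "*":
            return matchstar(regexp[0], regexp[2:], text)
        if regexp == "$":
            return not text
        if text and (regexp[0] == "." or regexp[0] == text[0]):
            return matchhere(regexp[1:], text[1:])
        return False

    def matchstar(c: str, regexp: str, text: str) -> bool:
        """Search for c*regexp at the beginning of text."""
        while True:
            if matchhere(regexp, text):
                return True
            if not (text and (text[0] == c or c == ".")):
                return False
            text = text[1:]

    print(match("^a.*b$", "axxxb"))   # True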
Additionally, I've on occasion consulted the Linux kernel and PostgreSQL repos. Would recommend, although I'd definitely be lying or very ignorant if I said I was familiar with them.
During the prototype phase, anything goes; speed is paramount, and you just have to make it sort of work. Don't get hung up on writing pretty-looking code during this phase.
You should always practice, but don't get hung up.
Prototype given the green light? Ready to dive into the build phase?
I'll say this: google for "coding style guide".
It's in Python and a single file but it comes with a wonderful description and shows how a complicated task can be broken down into a few small and powerful functions.
For example, you can see https://github.com/xyncro/freya-optics/blob/master/src/Freya... and can immediately see the clarity and the consistency of the writer.
Not only does it contain great code snippets but it also covers repeatable techniques that will ease your life as a ROR programmer immensely.
- Ruby Tapas: https://www.rubytapas.com/
- Destroy All Software: https://www.destroyallsoftware.com/screencasts/catalog - looks like it is a bit more expensive than it used to be, but there is a lot of good stuff in those first 5 seasons.
A tiny C-subset JIT:
It might even be controversial to suggest these are examples of "good code" today, because the majority of code I've seen lately seems to be overly verbose and complex. In contrast, these are extremely simple and concise for the amount of functionality they contain. I think this style has unfortunately disappeared over the decades of promoting lowest-common-denominator so-stupid-even-an-illiterate-could-read-it coding styles and enterprise OOP design-pattern bloat-ism, but when I think of "good code", I don't think of Enterprise Java; I think of code which, upon being told what it does, makes you think "wow, I never thought it would be so simple."
Reference http://www.dpdk.org and https://github.com/freebsd/pkg
 - https://github.com/tomchristie/django-rest-framework/
A single file running both in python 2.7 and 3.
The same source can produce compilable code, or formatted comments.
Also, it is fascinating to consider how Go routines were implemented as part of Clojure core. Huey Petersen wrote a fantastic analysis of how state machines were used to implement Communicating Sequential Processes:
You can read along while looking at the source code here:
This is a great implementation of Communicating Sequential Processes, first described by Tony Hoare in 1977:
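If you just want a feel for the CSP style itself - independent processes that interact only by passing messages over channels - here is a toy Python sketch. It illustrates the idea, not core.async's state-machine transformation:

    import asyncio

    async def producer(chan: asyncio.Queue):
        for i in range(5):
            await chan.put(i)           # parks if the channel is full
        await chan.put(None)            # sentinel: no more values

    async def consumer(chan: asyncio.Queue):
        while (item := await chan.get()) is not None:
            print("got", item)

    async def main():
        chan = asyncio.Queue(maxsize=2)  # bounded, like a buffered channel
        await asyncio.gather(producer(chan), consumer(chan))

    asyncio.run(main())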
I assume that as a developer you are interested in solving (business?) problems through the act of writing software?
It isn't much different than being a painter, I guess. To be a good painter (or to be considered a good painter) you first need a good grasp of how to use the brush and how to handle paint (e.g. oil paint), i.e. you need to learn the technique. The more versed you become with the technique, the better you will become at painting - or rather, over time you will become better at painting what you intend to paint, at painting what's in your mind's eye, because you no longer have to think about the brush and paint.
When it comes to software, you first need a good grasp of programming. This means you will need to spend time practising the act of programming. Using two languages that are very different from each other can help; e.g. learn an imperative and a functional language - in your case that might be Ruby and Lisp. Your programs will need to interact with other systems, so you probably need to learn about operating systems, databases, queues, networking, etc. You don't have to be an expert in everything, but being a good all-rounder will certainly be beneficial.
Over time you will see that it becomes easier to think in solutions off the bat, rather than focusing on how you're going to solve a problem. This is basically what is referred to as experience.
So, to be a good developer you need to put in the effort and you need to put in the time. There usually aren't any short cuts. I've been doing this professionally for over 20 years and I'm still learning every day.
My thought is that your server would have to continuously check each user's timeline for tweets and email them any mentions every day / 2 days / week. There is a daily cost for each user, and it will only increase as more users use your service.
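Roughly what I mean, as a sketch - fetch_mentions() is a hypothetical stand-in for whatever Twitter client you use, and everything here is illustrative rather than your actual architecture:

    import smtplib
    import time
    from email.message import EmailMessage

    def fetch_mentions(handle):
        """Hypothetical: return a list of new mention texts for this handle."""
        raise NotImplementedError

    def send_digest(address, mentions):
        msg = EmailMessage()
        msg["From"] = "digest@example.com"
        msg["To"] = address
        msg["Subject"] = f"{len(mentions)} new mentions"
        msg.set_content("\n".join(mentions))
        with smtplib.SMTP("localhost") as smtp:    # assumes a local mail server
            smtp.send_message(msg)

    def run(users, interval_seconds=86_400):       # daily by default
        while True:
            for handle, address in users.items():  # cost scales with user count
                mentions = fetch_mentions(handle)
                if mentions:
                    send_digest(address, mentions)
            time.sleep(interval_seconds)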
If we want to build broadband to poor neighborhoods, we should just tax people and have to government build it. The municipal franchising model is an awful way of accomplishing that goal. In return for building broadband to poor neighborhoods (where most people can't afford it anyway), you basically kill competition. Nobody but the incumbent can make enough money in an environment where they have to build everywhere in order to receive permission to build anywhere. And it basically bans "MVP" models of deployment, where a provider starts in a focused area with demonstrated demand and expands gradually.
...and watch AT&T lobbyists start crying and screaming about the "unfair" competition from local governments. Remember, government is good when it helps the rich, but bad when it helps the poor!
I suffer from moderately severe diabetic foot pain. My doctor prescribed some medication that provided marginal, at best, relief. About a year ago, I had a persistent fever. I took some aspirin to knock down the fever (I almost never took aspirin before that) and discovered that it provided significant relief from the foot pain. More recently, I had a bad cold, and I took some cold medicine that includes acetaminophen, something that I also almost never took before that. It seems to help even more than aspirin. I am wary of taking acetaminophen regularly, because of the warnings about liver damage, but it is something I am considering for those times when the foot pain is severe.
What I would recommend for people suffering from these kinds of problems is that they embrace the idea of Quantified Self. Start keeping a diary of everything. The things you eat, the things you do, and the symptoms you experience, in excruciating detail. Then start looking for patterns and correlations. Put the data into a spreadsheet, generate charts. Learn statistics, learn about correlations. Learn Bayesian Probability. You may be able to find the specific triggers that cause your symptoms, and maybe the things you can do (or avoid) to reduce your symptoms.
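A minimal sketch of the "diary to correlations" step, assuming you export the diary as a CSV with one row per day and columns like date, sleep_hours, sugar and pain_score (all of these column names are made up for illustration):

    import pandas as pd

    diary = pd.read_csv("symptom_diary.csv", parse_dates=["date"])

    # Pairwise correlation of everything you logged against the symptom score.
    correlations = diary.drop(columns=["date"]).corr()["pain_score"].sort_values()
    print(correlations)

    # Correlation is not causation: treat anything that stands out as a
    # hypothesis to test, e.g. by eliminating one suspect food at a time.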
2 years later I grew giant, hard skin patches on my knees. My fingernails started detaching. I could live with it, although as a gym rat I needed to make sure my knees were wrapped since doing anything on my knees ripped the skin open.
My fingernails continued to go through phases of detachment and normalcy, and I've lived with the limited skin problems because they were limited. But just in the last year I've grown giant scaly patches up and down my back, and now there are spots growing, expanding across my forehead and nose. I can see how people look at me now, there's something clearly wrong with my face.
I'm terrified. I've always had general fatigue that seemed to match flareups in my skin. While the angriest, spreading psoriasis + fatigue comes in waves, each wave seems worse than the worst that came before. I'm scheduled to see a dermatologist in a few weeks (in Canada, I've had to wait 3 months just be able to have this appointment) but my deepest fear is that I'll be told this is my life now. Even though I know that's the answer, I know there's no cure for psoriasis.
I don't know why I'm typing this, except perhaps that I realize that both the author and I are on the same hunt, and not finding much of anything that can help us feel like we're more than 80% (or 60% or 40%) of the person we were as teenagers. But perhaps there is hope that I'm seeing more articles like this, autoimmune diseases are getting more attention.
The funny thing is that if you have one autoimmune disease, it is highly likely that sooner or later you will also encounter another one. I was diagnosed with three. Including one affecting my eyes pretty badly.
The way I am managing it all is by:
1. making sure my adrenal does not deplete (went through a super adrenal crash last year)
2. good dental hygiene (it does help massively)
3. avoiding sugar, refined carbs, alcohol
4. probiotic natural yogurt and apple cider vinegar, which seem to keep my system happy; milk makes things worse
5. B-complex and vitamin D on a regular basis
Points 3 and 4 are to ensure that my gut is well maintained.
The important thing is to make sure you take good care of your adrenal gland and pituitary, because they tend to overwork to compensate for the malfunctioning organ that is under attack at the time.
The longer I stick to my unrefined diet and good sleep cycle and avoid stress, the more my body heals.
I used to be quite a sharp-minded computer scientist; now that sharpness comes and goes, and at other times there is fog, which is not very nice :-) But I have hope that eventually I will be better - I am already much better than in the last few years :-)
Been down the kumbaya food path, maybe it's what got me this disease (tried some shady immune boosting drinks on and off before getting diagnosed, just for the sake of it).
I am monitoring research and for my specific disease there doesn't seem to be much in the pipeline. But one drug developed for some other auto-immune disorder can manage mine as well. I am putting hope in AI and machine learning, to guide scientists to faster discoveries. And I hope the socio-political climate doesn't escalate to the point of halting research.
They certainly are. It's an interesting area of genetic correlations, the autoimmune cluster: https://en.wikipedia.org/wiki/Genetic_correlation#Disease Also notable for relatively large genetic effects from the large and difficult MHC gene.
But sometimes it screws up. Maybe you encounter something that triggers expansion of lineages with autoimmune potential. Or maybe there's damage that exposes previously hidden antigens to the immune system. There's undoubtedly a genetic component too.
The heated form is also useful, but the key has been to include lots of the raw form.
CFS tends to be triggered by stress (as it was in this case), and sufferers tend to lead stressful lives (being an editor of New Yorker is definitely stressful). The things that worked for her -- alternative therapies, living a healthier and less stressful life -- typically help with CFS.
From having CFS myself and recovering, it seems to be caused by a state of persistent burnout caused by chronic stress. It's usually multiple stressors that build up and cause the brain to shut down the energy supply. It's not caused by a moral failing or laziness. I see it as similar to the central governor that limits athletic performance. You have no control over it, other than to change your lifestyle and hope that your brain recognises that there is no longer any chronic negative stress.
The theory is that CFS is caused by long term anxiety which sets off biochemicals such as adrenaline into your heart and bloodstream. These chemicals suppress your immune system and reduce your energy systems.
The treatment aims to switch off the source of the negative chemicals i.e. stop the anxiety. The methods are a mixture of self talk/coaching, posture changes, recognizing and confronting sources of anxiety and setting clear life goals to work towards a place of feeling less anxiety.
The LP was recommended to me by someone who also recovered from CFS. I'd say definitely give it a try if you are a sufferer of CFS, chronic anxiety or similar illness.
After 2 weeks of checking everything in every way possible, they figured I had autoimmune hepatitis (the body attacks my liver). That explained why I had been very tired. Doctors said it would take about 6 months before I could return to school, because I would be too tired and might get ill, since my immune system would be weakened by the medication suppressing it.
After 2 weeks of staying mostly home, taking immunosuppressants, I was bored as hell, went back to school and was back in full form, and I rarely get a cold or anything.
Never had any symptoms other than yellow eyes and extreme tiredness when it was at its worst; apart from that it only shows itself when people actually analyze my blood...
No one really knows what the cause is, but there are many theories. It's fun to add some mystery to life, there's still a lot to discover ^^
Data Fountain: https://clients.adaptivebiotech.com/immuneaccess
Just last March we hiked in Patagonia and camped there. We hiked about 8 km to the campsite; she even carried more weight on her back than I did. But this last December, when we went snowshoeing in the Canadian Rockies, she had to stop every few minutes from fatigue, and I helped carry half of her stuff for her. Not having her as she was in the old days has been rather sad. She has an amazing attitude about it, which helps a lot. But I'm constantly praying and hoping for her to get better.
I have read so many things online and have come out rather frustrated at how limited medical science is when it comes to treating or fixing autoimmune disease.
The only thing I've found so far is this:
I call it the "nuclear option": they use chemotherapy to kill your immune system and stem cells to rebuild it from scratch. There is apparently a hospital in Mexico that does this for less than $100k, with some success but no true scientific rigor yet.
Alternatively, there is some research suggesting that fasting for 48-72 hours on a regular basis can regenerate a new immune system. However, it's unclear whether the new immune system would just attack the same targets again or would be blind to that history. I wish they would do more research on that front.
I have also been thinking about other ways that technology might help sufferers better monitor themselves, and ultimately use some data analysis to understand whether there are specific diet changes that can help keep the disease in control.
If you know of other things please comment.
From the start, though, the study of autoimmunity has been characterized by uncertainty and error.
I hate the term and the very concept of an autoimmune disease. I can't wait for the world to conclude this was lazy BS that allowed doctors to gloss over their lack of understanding of what was really going on. I am heartened to see the above passage acknowledging that "at least, that's the premise" (aka the idea or current theory) and the next sentence acknowledging that this idea really isn't on very solid ground.
Back in the day this wasn't the case. There wasn't this barrage of info. You only knew of a handful of cases in your small circle of friends or family. It didn't seem like danger was lurking around every corner. Anyone else feel the same way?
2 years ago I was running my startup Tennis Buddy, and everything was fine when I got pins and needles pain one morning all over my body and blurred vision.
I have had this a lot of times before in my life (around 20 times), and it always went away with rest within a week, with no problems. However, this time I was in the middle of finishing a contract and didn't rest. A couple of days later I reduced my workload to get better, but it seems it was too late, since the pins-and-needles pain just didn't go away anymore.
After a month of still not getting much better, I went to see doctors, but they could not find anything. There was an assumption that it was psychological; however, I just don't have any symptoms of depression.
During that time, I noticed that even very little exercise (running for 1 minute) increases the pins-and-needles pain for several days or even weeks. For the next year, I tried not to exert myself at all and stopped all work to get better. This did reduce the symptoms; however, even small exertion triggers them again. After this happened a couple of times, I also got strong muscle twitches (several thousand per minute sometimes) all over my body.
There was also the assumption that I had CFS; however, it does not fit, since I have no fatigue at all - only constant nerve pain / pins and needles, blurred vision and muscle twitches.
I also moved back to my parents and had to leave my friends, so it's getting really difficult for me to stay positive. Please let me know any advice that you might have.
All autoimmunities should be amenable to cure this way, as the malfunction is based on bad data that is stored in the immune cells, nowhere else. Newly created immune cells do not have this malware; they acquire it from the existing population. (It would be interesting and novel to find an autoimmunity where this is not the case.)
Unfortunately, the cost-benefit equation for this current form of ablation doesn't work for things that don't kill patients. No-one undergoes chemotherapy for a condition that merely shortens your life expectancy by a decade and makes you miserable, as chemotherapy has a significant risk of death and shortens your life expectancy by a decade.
So what is needed are better forms of ablation, those with no side-effects. The programmable gene therapy cell killer produced by Oisin Biotechnologies is one possible class of approach, as are other targeted cell killing approaches such as that demonstrated last year to selectively kill blood stem cells.
Then an application of cell therapies is needed, creating immune cells from a patient tissue sample, and infusing them in bulk immediately following ablation, to remove the period of vulnerability.
These are very feasible targets. A company could be founded today, right now, to do this, and have something ready for human trials by 2019. Sadly, here as elsewhere in medicine, there seems to be no hurry to change the world.
Q: "How does PyCharm Edu differ from PyCharm Professional Edition or PyCharm Community Edition?"
A: "PyCharm Edu is based on the Community Edition and comprises all of its functionality. Additionally, it installs and detects Python during installation. It has a simpler UI (adjustable in settings) and adds a new "Educational" project type.
PyCharm Professional Edition additionally supports different web development technologies, has remote development capabilities and additional languages, and supports working with databases."
(not affiliated but I want to support it by bringing it to your awareness)
It's a native IDE, it's fast, it doesn't crash, and there is a portable version as well. I've been using it for the last four or five years when developing in Python.
I'm pretty partial to the VS Code and command line approach, myself. It teaches you basic command line stuff, which is good for anyone learning CS in the long run, and it teaches you how to NOT rely on an IDE for most of your syntax and semantics which is enormously helpful for those starting out (even though the learning curve is a bit steep).
Even if you do invoke the interactive mode in PyCharm, each execution creates a separate prompt, causing confusion. I'll admit I'm not the most skilled PyCharm-user. Perhaps there's a way to make it work like IDLE?
The other thing I'd like to see is more intelligent defaults for project location. On Windows, it defaults to the C drive, which is fine except for when students don't know where their documents folder is buried. Putting a git repo in a system folder breaks git because of permissions, but the UI will never tell you that.
If you program python (especially Django), you should give PyCharm a go!
I dug around the site a bit and found the quote above, but didn't find a link to the source. Anyone find one?
What JetBrains needs to do is create a good series of instructional videos to show how they intend these products to be used. They are excellent and very powerful but you are left to peek and poke around to figure out how they intended you to use and configure them. The various videos available, last time I looked, are seriously outdated. For example, there are a bunch of different ways to work with PyCharm and Django.
The other thing they need to do is improve their customer service. The couple of times I needed an answer not found on sites like SO it took something like 3 to 5 days to get an answer from them.