I wish I could use it at work, since we (Hootsuite) already use Scala heavily on the backend, but I am reluctant in part because Scala.js does not quite have the financial support of Lightbend. Or so it seems; it's a bit hard to tell where Lightbend ends and the non-profit Scala Center begins. The latter did pay to get some features implemented, but reading their advisory board minutes, I'm not sure they have enough funding to pay for the majority of Scala.js development, which, if I understand correctly, currently happens for free as part of a PhD (note: my information might be wrong/outdated!)
So, if anyone involved with Scala.js is reading this and has better insights on the situation, it would be nice to know.
But I will keep using it regardless. It's marvellous.
Interesting development. This makes me want to dual-license my software so that violations have more teeth to them.
1. Number of releases per album individually tagged: https://musicbrainz.org/release-group/f5093c06-23e3-404f-aea...
2. The amount of metadata for an album: https://musicbrainz.org/release/b84ee12a-09ef-421b-82de-0441...
When you get used to this kind of high-quality metadata, it's just so, so sad to see how companies like Spotify treat metadata. As an example, look up Bob Marley & The Wailers on Spotify and try to find original releases, and then compare that to the list found here:
...and the sad part is that the metadata is freely available, with a permissive license.
I lightly modified a version of the Filemon driver from Sysinternals and wrote a little C program that used the driver to monitor for mp3s being played and then grab the perceptual audio hash of the file using trm.exe from Musicbrainz. It then sent the resulting fingerprint off to my website (written in glorious PHP3 no less!) and you could login with an account to see stats on the music you'd been listening to (done with meta data pulled from Musicbrainz).
Surprisingly, it worked reasonably well... though I'm very sure that if I looked at the code now I'd run away screaming.
Really cool to see they're still going strong after all these years!
This makes playlists resistant to filename changes, moves, or even losing all the actual audio tracks and having to buy them again, all because MusicBrainz provides such accurate metadata.
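A tiny sketch of why that works (all IDs and paths below are made up for illustration): the playlist stores stable MusicBrainz recording IDs, and a local index, rebuilt by scanning the tags embedded in each file, maps MBIDs to wherever the files currently live.

```python
# Playlist entries are MusicBrainz recording IDs (hypothetical values),
# not file paths, so renames and moves can't break them.
playlist = [
    "b1a9c0e9-d987-4042-ae91-78d6a3267d69",
    "f970f1e0-0f9b-4c76-8b3d-000000000000",
]

# Rebuilt by scanning tags on disk; survives renames because the MBID
# lives in each file's metadata, not in its filename.
library_index = {
    "b1a9c0e9-d987-4042-ae91-78d6a3267d69": "/music/new_location/track01.flac",
    "f970f1e0-0f9b-4c76-8b3d-000000000000": "/music/other/track02.flac",
}

def resolve(playlist, index):
    # A missing MBID means "re-acquire this recording", not "playlist broken".
    return [(mbid, index.get(mbid)) for mbid in playlist]

for mbid, path in resolve(playlist, library_index):
    print(mbid, "->", path)
```

If a file is lost, its MBID stays in the playlist and resolves to `None` until the track is re-acquired and re-tagged.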
Just to name a few of the other projects: there's AcousticBrainz, collecting acoustic information which may be pretty useful for machine learning; CritiqueBrainz, for collecting user reviews of songs, albums and more; ListenBrainz, an open scrobbling service a group of people including former last.fm employees initially hacked together in a weekend; and finally BookBrainz, which tries to be what MB is but for books.
Over the last year the people running MB have worked on getting companies that use the data to support the project, resulting in a quite impressive list of supporters including big names like Google, Spotify and the BBC.
MB has also collaborated with our fellow data nerds over at the Internet Archive to create the Cover Art Archive. 
In general the project is run by people who equally love both data and hacking. Feel free to stop by on the IRC channels #musicbrainz and #metabrainz on freenode!
https://acousticbrainz.org/
https://critiquebrainz.org/
https://listenbrainz.org/
https://bookbrainz.org/
https://metabrainz.org/supporters
https://coverartarchive.org/
You should try out the demo queries linked from that README if you want to get a sense of the depth of information available in their database.
They seem to have everything I throw at them, except for:
1) Extremely new releases (on the order of a-few-hours-after-release)
2) Some niche songs that haven't been officially released (soundtracks for some Korean television shows)
Ever since then I've wanted to know whether there are other maintained/curated music databases.
I also didn't realize at first that they offer a public API. The Picard client was decent, but I'd be interested in a command-line solution. Does anyone know if this exists?
As I recall, it was pleasant to work with and did what I needed it to quite nicely, aside from the removal of a feature my code had depended on (anonymous/unauthenticated search), by which point the project was already basically dead and not worth trying to fix (that was just the last nail in the coffin). In any case, nice to see that it's still active.
either way, nice to see it
MusicBrainz is the third project of its kind. Two older projects got bought by the media industry (Sony and Magix). Such a database becomes useless if it doesn't receive updates.
First there was CDDB, short for Compact Disc Database: a database for software applications to look up audio CD (compact disc) information over the Internet. A client calculates a (nearly) unique disc ID and then queries the database; as a result, the client is able to display the artist name, CD title, track list and some additional information. CDDB was invented by Ti Kan around late 1993 as a local database that was delivered with his popular xmcd music player application. CDDB is a licensed trademark of Gracenote. In March 2001, CDDB, by then owned by Gracenote, banned all unlicensed applications from accessing its database. As of June 2, 2008, Sony Corp. of America completed its acquisition (full ownership) of Gracenote. https://en.wikipedia.org/wiki/CDDB
Then there was freedb, a database of compact disc track listings where all the content is under the GNU General Public License. To look up CD information over the Internet, a client program calculates a hash from the CD's table of contents and uses it as a disc ID to query the database. If the disc is in the database, the client is able to retrieve and display the artist, album title, track list and some additional information. It was originally based on the now-proprietary CDDB. On October 4, 2006, freedb owner Michael Kaiser announced that Magix had acquired freedb. On June 25, 2007, MusicBrainz, a project with similar goals, officially released its freedb gateway, which allows users to harvest information from the MusicBrainz database rather than freedb. https://en.wikipedia.org/wiki/Freedb
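The "disc ID from the table of contents" step both services rely on is simple enough to sketch. This follows my reading of the published CDDB/freedb algorithm (digit-sum the start time in seconds of each track, then pack checksum, total length and track count into one 32-bit ID); the table of contents below is made up:

```python
def cddb_disc_id(track_offsets_frames, leadout_frames):
    # CD addressing uses 75 frames per second; offsets conventionally
    # include the standard 2-second (150-frame) lead-in.
    def digit_sum(n):
        return sum(int(d) for d in str(n))

    starts = [f // 75 for f in track_offsets_frames]   # start times in seconds
    checksum = sum(digit_sum(s) for s in starts)
    total_seconds = leadout_frames // 75 - starts[0]   # playable length
    # Layout: XXYYYYZZ = checksum mod 255, total seconds, track count.
    return ((checksum % 0xFF) << 24) | (total_seconds << 8) | len(starts)

# Made-up table of contents: 3 tracks, lead-out at frame 45000.
print(format(cddb_disc_id([150, 15000, 30000], 45000), "08x"))  # → 08025603
```

The weakness the comment hints at is visible here: two different discs with similar track layouts can collide, which is why the ID is only "nearly" unique.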
There's FreeDB (http://www.freedb.org) which does roughly the same thing, starting from the old CDDB database before Gracenote, and then Sony, bought it. Their database dump is supposedly available.
- At the base, there is Donald Knuth's program `tex`, itself written in a strange language (WEB) that is essentially an ad-hoc macro-expansion system (not used by many others, and not even by Knuth today, who prefers CWEB), and compiles (via `tangle`) to a dialect/version of Pascal (Pascal-H) for which a compiler hasn't existed for years. [It also "compiles" (via `weave`) to the printed book TeX: The Program]
- Then there is LaTeX, an elaborate set of macros written originally by Leslie Lamport (another Turing award winner) and later by a team, to be interpreted by the TeX program, which was never designed by its original creator for such elaborate programming.
- There are entire new programs (aka TeX engines) like pdfTeX and XeTeX, created by editing the original `tex.web` in different directions.
- There are the binaries of all these programs, compiled using `web2c`, a program written solely for converting all these WEB programs written in (basically) Pascal into C code, which is neither an arbitrary Pascal-to-C converter nor even an arbitrary WEB-to-C converter.
- There is LuaTeX, a manual rewrite of TeX in C, embedding a Lua interpreter and adding many hooks and extensions.
- There are thousands of macro packages written by thousands of people of varying levels of skill and foresight, on top of TeX, LaTeX, and other macro packages themselves: essentially everything on CTAN (which was the inspiration for Perl's CPAN, and ultimately many languages' package repositories like Python's PyPI etc.)
And all this without even mentioning ConTeXt, Metafont, MetaPost, BibTeX, Kpathsea, and various assorted utilities and graphics drivers.
I'm now closing in on 10+ years of programming experience, yet nothing comes close to debugging a faulty custom LaTeX command... it can quickly turn into an unreadable mess, but you have to admit that once everything is swept into a preamble.tex file, the rest of the code is very clean. Especially with AUCTeX in Emacs, which displays most math symbols as their true Unicode counterparts.
Funny story: one of my first gigs was working in a music instrument shop where I was basically the IT guy, from sysadmin to web dev. At some point the software that created the barcode labels stopped working. I had to find an automatic way to make those labels, so of course I turned to LaTeX. All I needed to do was write a batch file calling `pdflatex` with a template tex file, and a PDF for the label was promptly sent to the printer! There is probably some Python package for doing the same thing, but I was so proud of seeing Computer Modern tagged on every instrument in the shop!
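For the curious, a hypothetical minimal label template in that spirit might look like this (dimensions, the product text, and the barcode-font approach are all my assumptions; a batch script would substitute the name and code, run `pdflatex`, and send the PDF to the printer):

```latex
% label.tex -- one small page per label; compile with pdflatex.
\documentclass{article}
\usepackage[paperwidth=62mm,paperheight=29mm,margin=2mm]{geometry}
\pagestyle{empty}
\begin{document}
\centering
{\small Stratocaster MIM}\\[1mm]  % product name, substituted by the script
{\ttfamily *0123456789*}          % code line; a Code 39 barcode font
                                  % (e.g. a "3 of 9" package) would render
                                  % the starred digits as scannable bars
\end{document}
```

The same substitute-and-compile trick scales to a whole batch of labels by looping over the inventory in the calling script.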
All of the CSS/HTML based solutions cost thousands per license, so that's out.
I'm now on to SILE, which fixes a lot of problems with tex. I can only hope that it's advanced enough to properly typeset a novel.
I guess everyone who thought that "signed integers" are cryptographically signed weren't THAT wrong after all :D
By my reading, this allows not a whitelist of pages, but a whitelist of arbitrary addresses. Different granularities entirely. Can anyone else bring a light to bear on this?
It's not the misses you worry about, it's the hits
(I still think it's outrageous that C is still a single-pass language; we shouldn't need separate declarations and definitions any more.)
I wonder when Microsoft will do any work on the VBA editor. It's not like VBA is going away. Office users still write new VBA every day. They need it.
Popularised (if we can call it popular!) by Charles Simonyi of Microsoft fame, who founded Intentional Software, a company that was recently purchased by Microsoft.
There was an interesting editor called Isomorf that demonstrated the benefits of a non-text-based editor.
(site is down: https://isomorf.io/) (YouTube demo: https://www.youtube.com/watch?v=awDVuZQQWqQ)
I would really like to see something like this take off.
I firmly believe we can only unlock the next generation of software engineering by breaking free from plaintext. Think about it: how many more ASCII symbols can we mangle together to create meaning and context?
A structural editor takes all of that away. Suddenly syntax becomes a choice just like the colour theme of your editor.
Plaintext programming puts us into a fight with the computers, because on the one hand we need to keep the syntax parsable, and on the other hand humans need to read and write it.
It's a huge conflict of interest. If you want to provide information to the compiler, the syntax becomes hard and complicated (rough example: Java). If you want to keep the syntax human-friendly, the program becomes weak from the compiler's point of view (rough example: Python).
Our editors need to be context aware so they can hide/show relevant information and encourage people to provide as much information about the context/domain as possible.
If you look around you see we have been doing a lot of this stuff in the past decades but for some reason we just half-ass it by baking stuff on top of plaintext.
For example embedding documentation or even unit tests (python "doctests") in comment blocks in ad-hoc languages.
Or we embed naming conventions and so on to relate concepts with each other.
For example a "User.js" file and "User.spec.js" file for a test.
If we kept information in a structured manner suddenly so many of our problems would go away.
For example we will get structured version control. No need to have something like git tracking lines in files.
We will get unit testing that is always correctly tied to its relevant components.
We will get documentation that is structurally accurate. The editor could switch between programming and "documentation" mode. But the documentation would be a first-class object of the program not just some text that is shoved into it somewhere.
We will get much smarter re-factoring.
We will get much better compatibility across versions, because there's no syntax to worry about breaking from a textual perspective: the program becomes a semantic tree, and older programs can be "transformed" to fix them or make them compatible or something similar.
Because we are text-free the environment can encourage the programmer to provide a lot more information because it can get folded/hidden/etc.
The "units" will all have unique identifiers so confusion in naming and so on will be significantly reduced.
Perhaps you could create and publish modules/units in some central repository then use them in your projects. Kind of like NPM for example but a lot more structured.
So you could import a bunch of "units"/functions from someone else's catalogue.
Because everything could have metadata attached to it you could imagine for example "security advisories" could be attached to certain units such as a function and published.
The environment would know exactly in which places you are calling that exact function and it could alert you to the fact.
You could do semantic find and replace ("show me all sql queries", "show me all untested functions", "show me all functions modified by John Smith since last 14 days", "show me all undocumented functions", etc...).
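Amusingly, some of these queries are already approachable today by parsing the plaintext into a tree first. For instance, "show me all undocumented functions" is a few lines over Python's own `ast` module (the sample source below is made up):

```python
import ast
import textwrap

def undocumented_functions(source: str) -> list:
    """Return the names of functions in `source` that lack a docstring."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and ast.get_docstring(node) is None
    ]

sample = textwrap.dedent("""
    def documented():
        '''I have a docstring.'''

    def bare():
        pass
""")
print(undocumented_functions(sample))  # → ['bare']
```

The structural-editor argument is that this tree should be the primary artifact, so such queries wouldn't require re-parsing text at all.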
You could do smarter CI/CD by way of defining rules and constraints on the structure of the program.
Made-up examples:
- If the changeset involves objects tagged with "security", require approval before deploy
- If the changeset introduces new SQL queries, ping the DBA team
- If the changeset introduces more than 1 function without corresponding documentation, show a warning
- If more than 50% of the new objects introduced in the changeset lack corresponding test cases, fail the build
- You get the idea..
The point is, all the cool stuff we'd like to do depends on us having a lot more structured information and context about our programs and a plaintext environment is not suitable and is hostile towards that.
2. 2FA makes people think they're safe, when they're often not. (SS7 is weak, thus SMS is weak, etc.)
3. There's not really a "secure" email account. The admin can read your mail. There's not really a "secure" phone number. The admin can use your number.
4. This seems ok, if your phone isn't pwned.
5. If you don't hold the keys, you don't own the coins. DO YOUR OWN COLD STORAGE.
1. Customer receives car.
2. Customer pays monthly amount based on the prevailing interest rate and predicted depreciation of the car.
3. Customer returns car after X years, at which time the depreciated value of the car along with the payments they've made pays for the car they received X years earlier.
Can someone explain to me how this is functionally different from a fixed-term lease?
Getting people to sign off on a contract they don't really understand is a great way to get more money out of the consumer than they'd otherwise spend.
My car costs me a set amount per month that I can afford, an amount that is actually equal to the monthly depreciation it would face anyway (on average) over a three year period (including the initial low value deposit I paid, this is still true).
Even if I bought it outright, which I couldn't afford to do, I'd 'lose' the same amount of money _in total_ thanks to that depreciation. Effectively I'm 'renting' the car for the same 'cost' as owning it, just without the upfront payment.
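The arithmetic above can be sketched with made-up numbers (flat-rate interest for simplicity; real PCP deals quote an APR with proper amortisation):

```python
# Toy PCP payment sketch -- every figure here is hypothetical.
price     = 20_000   # cash price of the car
deposit   =  2_000   # upfront payment
residual  = 10_000   # guaranteed future value ("balloon") after the term
months    = 36
flat_rate = 0.05     # hypothetical flat annual interest rate

financed = price - deposit                       # amount put on finance
interest = financed * flat_rate * (months / 12)  # flat-rate interest over term
# Monthly payments only cover depreciation down to the residual, plus interest.
monthly = (financed - residual + interest) / months

print(round(monthly, 2))  # → 297.22
```

The point the comment makes falls out of the formula: the payments track `price - residual`, i.e. the depreciation you'd eat anyway, plus the financing cost of not paying upfront.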
Further, GAP insurance purchased at a one-off price of about 100 (which was far more tricky to make sure I got the correct thing) will cover any difference between the 'hand back' price at the end of my term and the value of the final payment. So basically, if the car does depreciate below the value of the final payment, the GAP insurance will make up the difference. Likewise if I have an accident etc.
I reject the idea that PCP deals are like 'buying and selling a series of homes using interest-only mortgages'. Cars lose value, always. Whether you buy it outright or not, it'll depreciate in value (save for some hyper rare beasts you aren't going to be buying on PCP anyway). But the article doesn't address this point at all as far as I can see and it's an important part of the value proposition of using these types of deal.
In general people are sitting ducks when it comes to being fleeced by parties with a plan.
I always wonder why schools don't even teach the beginnings of finance to everybody. You'd almost think there is a reason why it explicitly is not being taught.
In addition, when you consider the Prius's high gas mileage, low maintenance, long lifetime (over 500k miles) and high resale value, it's a great deal if you're planning on driving quite a bit.
Dangerous equipment should have warnings so that you don't lose your fingers and financial tools should have warnings so that you don't lose your shirt.
A decade ago I bought a Fiat (in Brazil) and was offered financing at 0.99% a month. This was worth it, as fixed income investments were paying more than 1% a month. Except that the administrative fees made the effective rate something like 1.99% (which was not worth it). The salesperson argued that I could pay the fees in installments too. It made me angry that they are allowed to do this to people who can't do the maths.
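A quick sketch of how fees inflate a quoted rate (all figures hypothetical, and the exact gap depends on the fee size and term, so these numbers won't match the anecdote):

```python
# How upfront fees raise the effective monthly rate of a loan quoted at 0.99%.
principal = 30_000    # money you actually receive
fees      = 2_000     # administrative fees rolled into the loan
months    = 24
quoted    = 0.0099    # advertised monthly rate

# Monthly payment on principal + fees at the quoted rate (standard annuity).
p = principal + fees
pay = p * quoted / (1 - (1 + quoted) ** -months)

# Effective rate: the r at which that same payment amortises only the
# money actually received.  Bisection is plenty accurate here.
lo, hi = 0.0, 1.0
for _ in range(100):
    r = (lo + hi) / 2
    if principal * r / (1 - (1 + r) ** -months) < pay:
        lo = r
    else:
        hi = r

print(f"effective monthly rate ~ {r:.2%}")  # noticeably above the quoted 0.99%
```

The trick works precisely because most buyers compare quoted rates, not the cash-flow-based effective rate.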
And if you agree with that, then let me tell you a story about the healthcare industry...
The public could reasonably assume that the government might provide guarantees for products in the first group, in the way they do for bank accounts. Products in the second group would explicitly be excluded from any government guarantee - if you passed the exam, and want to risk your own money, totally up to you, but don't come expecting your fellow citizens to bail you out if things go terribly wrong later.
So for example, interest-only home mortgages are almost always a bad choice for most consumers, so they would probably be in the second group. So you could still get them if you really wanted one, but you would have to prove you knew what you were doing, and were willing to give up any hope of a government bailout.
This is simple. Cars (especially new cars) depreciate super-fast; and houses have appreciated at crazy rates at least for the last few decades. Don't put your money in a fancy car.
That would put a lot of pressure on the industry indirectly to be more transparent to the average consumer. I would think anyway. I don't have anything except anecdotal evidence and a gut feeling to back this up.
I think car ownership being needed outside the 12 major cities (lookin' at you, NYC!) that have good public transit is a national crime in and of itself.
This story is UK based, is there no Truth In Lending type act to help simplify these contracts into terms people can readily understand? A recent car purchase I made in Georgia (US) was very easy to understand, all the numbers on one sheet.
Also: he was performing an important role in Ghana delivering medicines, and by migrating, now that sector has lost an important person. I wish he would go back with his newfound knowledge and help make things better in Ghana, where he is needed much more than in the US.
Stories like these really make you put things into perspective. Here I am complaining about how hard my professors are in a crap university while this guy comes from the depths of poverty and works his way up through one of the best public health programs in the world. My daily problems are nothing compared to the obstacles this guy had to overcome.
Kudos to that man.
"My uncles from my father's side took all his properties, per the custom in my village in Ghana" - that custom looks like a serious problem that deserves some analysis, and I think it is a very real one.
I wish you all the best.
On one hand, it's a heartwarming story of a man breaking through barriers to achieve success from desperate beginnings. Kind of an American dream story, really.
But if everyone with those qualities leaves the community, the rest of the people are helpless and will be mired in poverty forever.
It's difficult, morally, to balance the benefit gained by the guy who took the action, compared to the loss his community suffered by his departure.
In any case, I think this sort of thing is a significant reason why countries don't develop. Change always starts from a small group of changemakers - if those people all just emigrate, nothing will ever change.
Migration since the 60's has become a giant IQ-sorting system and that's having huge consequences on all sides. And this never seems to get discussed, oddly.
Also, saving 10% of the V8 code size is enormous.
The right path forwards is for JITs to emit SIMD code when possible, and for JS engines to provide OpenCL-like GPGPU-targeted APIs, following the trail that WebGL blazed with JS shaders.
The bug links to the actual discussion ("The V8 binary has a considerable amount of code that could be trimmed"): https://bugs.chromium.org/p/v8/issues/detail?id=5948
There's a nice treemap of the size of pieces of v8 in the first comment on that bug.
Comment #3 on the bug notes that SIMD.js is large: https://bugs.chromium.org/p/v8/issues/detail?id=5948#c3
Finally, comment #6 "There's no reason to keep the simd.js stuff around. I'll take it out ASAP.": https://bugs.chromium.org/p/v8/issues/detail?id=5948#c6
Someone notes that V8 is reduced by 500KB on Android. Looks like it got reverted a few times for breaking Node.js tests; someone notes "can you wait a day until we sort out Node first?", then SIMD.js gets removed again the next day. So I assume they "sorted it out w/ Node", whatever that means?
Also, trying to learn how the macroassembler is laid out; seems pretty neat. Kinda baffled that s390 and ppc are supported. Same kinda goes for MIPS, but I guess that's an Android-supported platform...
A lot of the code that looks like it's being assembled by the macro assembler has a double underscore followed by a space, followed by argument lists, which looks curious at first glance.
Looks like the double underscores are defined as:
#define __ ACCESS_MASM(masm)
System is overclocked and has been up and 100% solid for weeks now. 3.85 GHz actually worked, and I tried to stress it by compiling a bunch of stuff. Didn't have any segfaults or other issues. Worked great.
Only after using an artificial stress tool (stress-ng) did I finally decide 3.85 was not 100% stable at stock volts. Backed off to 3.8 to avoid voltage increase for now. Haven't rebooted since.
The issues being reported do seem legitimate, however. Not sure if it's the memory controller having trouble with certain DDR4, the motherboards, or errata within the Ryzen CPU itself. All seem plausible. Hopefully AMD finds a resolution. In the meantime I'm glad I'm not affected.
See the comment from inuwashidesu in this thread: https://www.reddit.com/r/programming/comments/6f08mb/compili...
Edit: and reading the comments in that thread it would be great if people would remark if they're running stock clocks and if they have upgraded their BIOS.
Might be affecting only a subset of users based on silicon?
So, still to be ruled out is a bug in GCC itself?
Initially this question was going to be "can we log executed instructions" but I rapidly realized that not even DDR5 could keep up with such a logging system - it would slow things down too much and likely mask the bug (not to mention the TBs of space that would be needed).
Rethinking a bit, my 2nd take is to see if it's possible to somehow repeatedly synthesize workloads from (presumably smaller, more manageable amounts of) seed data.
One of the users in the AMD forum thread (I don't seem to be able to get a permalink) mentions that they're experiencing gcc crashes on Ubuntu inside VMware on Win10! This means that the bug survives two kernels' preemption/task scheduling and a hypervisor! Interesting.
What stumps me is that some users are experiencing gcc segfaults, while others are getting faults in `sh`.
...yeah this has me stumped. CPUs are so fast, and we have no idea where the problem is.
EDIT: This comment is interesting: https://www.reddit.com/r/programming/comments/6f08mb/compili...
The default of 128 MB is plain stupid. I get why Amazon chose it, because it directly eats 2x the value of your backing store - something that can be hard to explain to customers with 4 GB disks attached who aren't really running any appreciable load through it.
But when you have 100+ gig disks allocated on a 2xlarge instance, the small value makes no sense whatsoever.
In the article: "and collects its Amazon EC2 instance type and current configuration"
... and I switched off.
I recently diagnosed a MySQL latency snag on a well known cloudy platform for a customer. I run rather a lot of comparative bonnie++, MySQL bench and Lord knows what else. I was able to convince the customer that my office PC ran MariaDB better simply because my single SSD on a rather shag Lenovo PC (a cast off from another customer!) had better i/o and latency than whatever they were being given by said cloudy provider.
I suggest you start with the basics: CPU, RAM, disc I/O and latency, network I/O and latency. Optimise those first and then work up the stack (and down, then back up etc.)
If you start with "assume a spherical EC instance" you may not be considering the whole problem -> solution -> realisation thing.
I had the pleasure of working with IoT and the LoRa protocol, writing and managing a LoRa network server: the piece of software that takes LoRa messages sent over radio and decodes them along the chain radio -> UDP -> MQTT (encrypted) -> MQTT (clear). [Shameless plug: I packed my experience into an on-premises service: http://loranetworkserver.com/]
The difficult part, I believe, is that chain; once you've got the MQTT message in the clear, what you do with it is quite simple, straightforward and overall a solved problem (unless you are getting millions of messages per second, but very few businesses have such load).
From what I understood from the Google page, they are solving the simpler problem of getting the data from an MQTT message and saving it to disk, maybe for later use in data mining. It is weird, because I wouldn't accept having my data saved on someone else's disk; at the bare minimum I would require the data to be duplicated in some standard database (Postgres, MySQL, Mongo), even one managed by AWS or Google, but a standard one that I can move at will.
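That "MQTT message to standard database" step really is small. A sketch, with the storage logic kept separate from the broker hookup so it runs without one (an MQTT client's message callback, e.g. paho-mqtt's `on_message`, would simply call `store_message`; topic and payload below are made up):

```python
import json
import sqlite3

def store_message(conn, topic: str, payload: bytes) -> None:
    """Decode one MQTT payload and persist it to a standard SQL table."""
    data = json.loads(payload)
    conn.execute(
        "INSERT INTO readings (topic, device, value) VALUES (?, ?, ?)",
        (topic, data["device"], data["value"]),
    )
    conn.commit()

# sqlite stands in here for "any standard DB you can move at will".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (topic TEXT, device TEXT, value REAL)")
store_message(conn, "sensors/temp", b'{"device": "node-1", "value": 21.5}')
print(conn.execute("SELECT device, value FROM readings").fetchall())
# → [('node-1', 21.5)]
```

Swapping sqlite for Postgres or MySQL is a connection-string change, which is exactly the portability the comment is asking for.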
It doesn't seem like the most valuable piece of architecture they could develop.
It might be a useful service for B2C devices though.
The reason the driver subsystem is architected as pluggable modules ("drivers") is to support the extremely wide array of organizations that have to build into it.
The reason why Linux is broken down into subsystems is to support the "specialists" who work on only one subsystem at a time.
The reason Linux is a monolithic kernel that has a large degree of complication internally (vs. a microkernel) is because Linus is strong enough to make it happen.
I mean, the logical error is right in the title. The author inverted cause and effect.
The graph that would ultimately support the point of the article would have the difference between a simulation of a uniform distribution of contributions by the authors, and have a full 0-100% axis for scale, as opposed to the 35-65% presented in this article.
That is to say that subsystems were defined solely based on technical considerations, which is how it should be if the goal is sound engineering.
Not sure what to make of the ratio between "specialists" and "generalists". A comparison to ratios from other projects would provide some helpful context.
I believe the answer is, yes, it would. While Linus is a stubborn and opinionated leader ("Benevolent Dictator For Life") it is those qualities, coupled with his extremely high standards, that have preserved the coherence of Linux's system architecture all this time.
We do this at work. It mostly works, modulo "Distributed Systems Are Hard".
But in source control, author is typically defined as the first contributor to a file, which doesn't always reflect the person who contributed the most content to the file.
The very separation that the article draws "core" vs "drivers" is actually highly representative of how the Linux community is structured. Most of the core work (including the driver subsystem's backbone) is done by long-term contributors who actually work on the Linux kernel full time. Most drivers actually come from occasional contributors.
Driver contributions are "specialized" for the same reasons why they're specialized on pretty much any non-hobby operating system, namely:
1. The expertise required to write complex drivers mainly exists within the organization that sells the hardware. Needless to say, these people are largely paid -- by the hardware manufacturers! -- to write drivers, not to contribute to what the article calls core subsystems. There are exceptions ("trivial" devices, such as simple EEPROMs in drivers/misc, are written by people outside the organizations that sold them), but otherwise drivers are mostly one-organization shows. In fact, for some hardware devices, "generalists" don't even have access to the sort of documentation required to write the drivers in the first place. (sauce: wrote Linux drivers for devices that you and me can't get datasheets for. $manufacturer doesn't even bother to talk to you if you aren't Really Big (TM))
2. Furthermore, there really are subsystems in the kernel that are largely a one-company show and are very obvious examples of Conway's law. IIO drivers, for instance, while started by Jonathan Cameron who, IIRC, is really an independent developer, are largely Intel's and Analog Devices' -- to such a degree that, even though they follow the same coding conventions, if you've worked there enough, you can tell who wrote a given snippet. Same goes for most of the graphics drivers. Most of Infiniband used to be IBM, I think. If you dig down in the drivers subsystems, you'll see even funnier examples (my favourite example is the ChipIdea USB controllers; a few years ago, support for USB slave mode on some Broadcom SoCs broke down because Freescale pretty much took over de facto ownership of the drivers, and some of their changesets worked fine on their ARM cores, but broke on Broadcom's funky MIPS-based cores)
Also, this is very weird to me:
> Adherence to Conway's Lay (sic!) is often mentioned as one of the benefits of microservices architecture.
Back in My Day (TM), adherence to Conway's Law was usually considered a negative trait, summarized by the mantra that, in the absence of proper technical leadership, an organization of N teams tasked with writing a compiler is going to produce an N-pass compiler.
Of course, this is a most negative example, but are we really, seriously considering that adherence to Conway's law is a positive thing today? That it's actually a good idea for the architecture of a software system to reflect the "architecture" of the team that created it, rather than, you know, the architecture that's actually best for what it's meant to do?
"Another sub-problem of the wider StarCraft (Blizzard Entertainment, 1998) playing problem is build order planning. The problem here is in which order to build certain improvements to the players base and in which order to research certain technology, a complex planning problem at a considerably higher level of abstraction than micro-battles. Here, Weber et al have data-mined logs of existing StarCraft (Blizzard Entertainment, 1998) matches to find successful build orders that can be applied in games played by agents."
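The data-mining idea in that quote is simple enough to sketch: treat each replay log as a (build order, outcome) pair and rank opening sequences by win rate. All the replay data below is made up for illustration:

```python
from collections import Counter

# Hypothetical replay logs: (build order, won?) pairs, as if data-mined
# from match records the way Weber et al. describe.
replays = [
    (("pylon", "gateway", "gateway", "cybercore"), True),
    (("pylon", "gateway", "cybercore", "gateway"), True),
    (("pylon", "forge", "cannon"), False),
    (("pylon", "gateway", "gateway", "cybercore"), True),
]

def winning_openings(replays, prefix_len=3):
    """Win rate of each opening (first prefix_len build steps)."""
    wins, games = Counter(), Counter()
    for order, won in replays:
        prefix = order[:prefix_len]
        games[prefix] += 1
        wins[prefix] += won
    return {p: wins[p] / games[p] for p in games}

rates = winning_openings(replays)
best = max(rates, key=rates.get)
print(best, rates[best])
```

An agent would then bias its build-order planner toward the high-win-rate prefixes, a much coarser but cheaper signal than planning from scratch.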
I don't know of a better way to do it, but it really makes it tough to jump in and commit to giving it a read-through.
I love Lisp. I had a great time learning it, but its lack of momentum is a big problem.
Due to the slow speed (100 voicemails/min), it's likely what they are doing is initiating a call far enough for the phone system to busy out your phone, then very quickly placing a second call that the phone system sends straight to voicemail because your phone is busy. Once this second call starts to hit voicemail, the original call is dropped.
I doubt phone companies let people directly send voicemail to people as they wouldn't be billing for that.
Absurd. The right to free speech does not include forcing others to listen to you.
I just counted. It's four times per day, like clockwork. Then there are various other robocalls that come in every couple days. So sometimes it's 4 per day, sometimes it's up to 7.
It's mildly annoying, but is there any other option than to just ignore it or keep blocking the endless new numbers that pop up?
What's the point of a death penalty if not for this?
I'm not sure anybody actually listens to their messages any more and so I rarely leave them. After all, if it's important, why would you call at all?
Not much, say 25 cents or around there. If it's important that's a negligible cost.
The proceeding code is 02-278.
I am reminded of this, where the goal is to waste the telemarketer's time as much as possible, and my thought is that if enough people used it, it would make spam calling not worth it.
I would gladly pay for a program I could load up on my phone and have it distract/annoy them back.
Now I just have to worry about people I know, phone interviews, and debt collectors (who seem to be harassing me less often via phone these days).
There's no limit to how low pro-business thinking can go.
The only responsible marketing is a nose around your fucking head. If I want your services I will contact you. Got it?
A data validation framework is not a toy project.
"JSON support for named tuples, datetime and other objects, preventing ambiguity via type annotations"
If you are interested, please have a look at the first unit tests to see how it works:
Note that the tests currently use the "ugly" NamedTuple syntax to be compatible with Python 3.5 and 2.7.
Does/will Pydantic handle all the standard dunder methods like __eq__, __lt__, __hash__, __cmp__ and faux-immutability the way namedtuple and attrs do?
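For readers unsure what "handling the dunders" means here, a minimal sketch using the standard library's typing.NamedTuple (this illustrates only the namedtuple behavior the question refers to, not Pydantic's API):

```python
from typing import NamedTuple
import json

# NamedTuple subclasses are tuples, so they inherit __eq__, __lt__,
# __hash__ and immutability for free; the question is whether
# Pydantic models also provide these.
class Point(NamedTuple):
    x: int
    y: int

a, b = Point(1, 2), Point(1, 3)
assert a == Point(1, 2)          # __eq__ compares field by field
assert a < b                     # __lt__ uses tuple ordering
assert hash(a) == hash((1, 2))   # hashable, usable as a dict key
try:
    a.x = 5                      # faux-immutability: assignment fails
except AttributeError:
    pass
print(json.dumps(a._asdict()))   # {"x": 1, "y": 2}
```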
Navigation can make good use of icons. Left, right, up, down, start, stop: these kinds of things can be learned and used widely. Things like text-manipulation icons, cursors, insertions, selections can be used effectively, but can be surprisingly hard to explain or even describe. Icons for operations can be really tough. Right now, as I look at my computer, I can see a library of arcane and archaic imagery. Telephones, disks, pen nibs, VCR controls (navigation, sort of), little boxes with arrows, little boxes overlapping, deadbolt locks, paper airplanes, file folders, fluffy clouds, paperclips, and of course little hamburgers.
These can certainly be useful clues, but they also can be very confusing. I've seen paint brushes used to indicate a paint brush in a paint program, but I've also seen the exact same icon used to indicate a screen refresh. Now I'm at as much a loss as anyone to come up with a good substitute, although I will note that I can't imagine any _good_ circumstances when a user needs to be in charge of refreshing the screen.
These days I'm using a lot of 3D software, and the user interfaces are a crust of complicated and indecipherable icons. And that's just the top layer of the UI. Almost all of them rely on a text/label system and hot keys for doing much of the work. Discoverability is essentially zero and the only way to get good is to learn the words and the alphabet. Pros end up _hiding_ most of the UI.
Sometimes a word is worth a thousand pictures.
The only one of the icons in the link that makes much sense is the one with kcal on it. Unless I dealt, in detail, with these every day, I'd never remember what the rest stood for. Something like this may make a lot of sense for the people who produce the labels, but I'm unconvinced it does anything good for those who need to _read_ the labels. Word labels in the intended reader's language are almost certainly the best way to go for actual use.
For example, for proteins: the word "protein" is understandable in English, French, Spanish, Italian, Portuguese, Greek and Swedish. The Russian "belok" should be understandable to Polish and Russian speakers, and the Dutch "eiwitten" should be understandable to Dutch and German speakers. Thus, writing "protein/belok/eiwitten" would be understood by almost everyone. "kcal" is even simpler and should be understood by almost everyone everywhere - maybe make it "kcal/calories" for more readability. "salt/sol/" should cover everybody as well. Perhaps throw in the Cyrillic as well if McDonalds expects a large amount of uneducated Russians in their restaurants.
You could even put the native language in front to prevent any insult to cultures. Am I crazy, or might this actually work?
In the end, it was determined that there was no guarantee that a re-established civilization could grasp what we were trying to say, and that perhaps just an area earning a reputation as cursed, via attribution of visitors, would be the best deterrent.
Therefore, I would submit: "These are not foods of honor. No highly held nutritional facts are described here. Nothing valued is here."
I find this amusing since the first icon is "kcal"... The information is perhaps too simple, as none of these really tells me what it means without a legend, and if your icons need a legend, then they aren't doing their job effectively.
Protein is the bottom of a stack -- implies important or less important, depending on your personal viewpoint on stacks. Fat is a scale, implying an association with weight -- a negative link. Carbs are a gas gauge implying energy -- a positive link.
At this point, no, I don't know what the symbols mean without a legend, but I'm sure at some point in time, not everyone knew that a red octagon on a post means stop.
"language-free" by constructing...a language. They even provide a translation dictionary!
Two character combinations for some of the McDonald's symbology can be kind of complicated, though:
dànbái = protein
zhīfáng = fat
tánglèi = carbohydrates
rèliàng = calories
Alphabets solved that problem millennia ago.
If they allowed a greater quantity of icons that work better for each region, why wouldn't this simply be better?
That said, I am trying to understand if there is enough value to go around when the network is smaller than (n) nodes? What if the application is not one that will benefit greatly from network effect, doesn't have need for the security/auth model and doesn't need massive compute/resources? What if it will never grow beyond a certain number of nodes (for whatever reason)? Note: those are NOT my opinions expressed in question form...they are pure questions that I want to brainstorm and hope that the responses are along the lines of "here's how those types of apps can benefit".
Also, at mass adoption levels (while understanding we are nowhere near that, but for the sake of the thought experiment), do we end up with millions of micro-networks, rather than the relatively small number of networks we have today? If so, does the crypto token model still hold up? My gut is it would for the infrastructure providers because they can support (n) networks. I am not sure about the rest of the ecosystem or what constructs need to be built/added if that model is to thrive?
I have a bet on for 1000 with a futurist friend: he says that within 10 years half of humanity will have made a transaction on a blockchain, and I say not.
My only regret at this stage is that I didn't make the bet in Bitcoin...
1. What about blockchain length? The article kind of alludes to this, but there seems to be this "we'll deal with that problem later" idea, even though it seems critical. The answer I always got was that chains would fork or be stored distributively, but that suggested the primary use would be in small networks, or that there would be critical problems to solve sooner rather than later.
2. Isn't a guaranteed decrease in monetary supply a problem? I was kind of under the impression that ideally a currency experiences a small increase in monetary supply, to avoid things becoming prohibitively expensive. The process of generating coin seems kind of backward to me in many ways, although I'm not an expert in the area.
That's a huge thing, but in the long run, I think that will be the only thing.
Bitcoin concerns me. If/when BTC makes its appearance in the everyday lives of ordinary people, its anonymity value will have eroded significantly. Traditional currencies have not yet started to compete with BTC, but they can and they will if necessary. Try getting a mortgage, car loan, or business loan with BTC as collateral, as one example of where my concerns rest. Look at the grossly inverted relationship between the BTC price and gold prices (artificially assuming 1 BTC = 1 oz).
Before BTC there was growing dissatisfaction with money center currencies that persists today. BTC 'took the edge off' for many in those circles and may have relieved pressure on gold prices.
I don't necessarily believe this, but I've read in economic revisionist circles that BTC would be means for certain central banks to redirect some demand and attention for precious metals away from their vaults and toward an asset class they, better than anyone, are capable of mining with their existing computing infrastructure. So, by invention or acquiescence, BTC serves money center interests, for now, but not indefinitely.
BTC remains a highly speculative and risky asset/network in my mind.
Traditionally, say I have a PHP/MySQL site that's on a server, say Digital Ocean, and files are uploaded to Amazon storage.
How does that translate to Ethereum? What about private messages? If everything is public on the blockchain, isn't that an issue? Does Ethereum run code?
Can anyone point me to some reading on this? And how to create a 'decentralized' social network?
This is the future and I'd like to get a handle on it, thanks!
Does anyone have a more formal explanation of how blockchain is being used here?
The main problem I then see is that if these tokens are required to use the platform, wouldn't their cost be prohibitive? And then the platform won't be useful?
These tokens are going up in value just like Bitcoin, which always had actual use (mostly on the black market), so its incentives are aligned with the token being used as a currency, but not with an app token that is required for the product.
So in effect, these are just overhyped quasi-securities offerings and trading.
What I'm interested in is the ability of token networks to be useful for legitimate transactions between entities. I think in particular there's potential for token networks to increase the trust and liquidity of virtual goods. Right now cryptocoins are basically only useful for traditional transactions that cash is already very efficient at. What I want to see more of is using the logging and trust ideas of token networks to develop transactions between virtual goods that normally exist in siloed ecosystems.
I recently got a 1080p projector for home use, so now movies / TV series in my home are viewed on a 100" screen. Content is mostly from Netflix and Amazon Prime Video.
Netflix does a really good job with encoding. I cannot say the same for Amazon Prime Video; even with their exclusive (in UK) offerings, like American Gods or Mr Robot, the quality of the encode is quite poor when viewed on a big screen. Banding, shimmering blocky artifacts on subtle gradients, insufficient bit budget for dimly lit scenes - once you become aware of the issues, it becomes really distracting.
OTOH a really big screen is a fantastic ad for high quality high bitrate content. Anything less than 2GB/hour is noticeably poor.
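For a rough sense of what that threshold means as a sustained bitrate (my back-of-the-envelope arithmetic, not the commenter's):

```python
# Average bitrate of a 2 GB/hour stream (decimal gigabytes assumed)
gigabytes_per_hour = 2
bits_per_hour = gigabytes_per_hour * 1e9 * 8  # bytes to bits
mbps = bits_per_hour / 3600 / 1e6             # per second, in megabits
print(f"{mbps:.1f} Mbit/s")                   # about 4.4 Mbit/s
```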
When did we decide that 24/25/30fps was good enough? Now we have a Blu-Ray standard that cannot handle greater than 30fps, and media corporations that are unwilling to release content via any other medium.
Put that together with ever-increasing resolutions, and the amount of pixels something moves across from one frame to the next becomes greater, and video looks more and more choppy.
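To put hypothetical numbers on that (the pan speed and resolution here are my assumptions, chosen for illustration): an object crossing a 3840-pixel-wide frame in two seconds jumps 80 pixels between frames at 24 fps, versus 32 at 60 fps.

```python
def pixels_per_frame(width_px: int, seconds: float, fps: int) -> float:
    """Per-frame displacement of an object panning across the screen."""
    return width_px / (seconds * fps)

# a 2-second pan across a 4K-wide (3840 px) frame
print(pixels_per_frame(3840, 2, 24))  # 80.0
print(pixels_per_frame(3840, 2, 60))  # 32.0
```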
Frankly, this is a much bigger problem than NTSC ever was. Even with content (The Hobbit, Billy Lynn's Halftime Walk) being created at higher framerates, users have no way to get the content outside of a specialized theater, because the Blu-Ray standard cannot handle it and because people seem to honestly believe that higher framerates look bad.
I suppose we can only hope that creators take better advantage of digital mediums that do not have such moronic, and frankly harmful, arbitrary limitations.
This is a very well-written introduction.
Machine learning becoming more and more popular will probably help :)
When I started as a self-taught programmer, I was basically fighting the code all the time while reading about all these best practices. I followed a top-down approach to learning, focusing first on the deliverables. As I learned more and more and kept struggling with my own code, the ideal implementation (of course involving OOP) seemed further and further off, always elusive. You could always add an extra layer, an extra abstraction here and there.
At some point I realized I was lost in an abstraction sea and not getting anything done. I abandoned some of the projects at mid-size and made https://picnicss.com/ as an example of the opposite. Oh boy, what a difference. A single stylesheet made in few days with great adoption and feedback from the community. I just saw a video of Google IO 2017 and they used it there in a demo! It also helped that I switched from PHP to Node.js around that time.
So I kind of got hooked on that. I have made quite a few tiny-sized, one-off projects ( https://github.com/franciscop/ ) and learned a lot about this quick-iteration coding. I wouldn't say it's the same as the OP's description of a nihilist, since that seems to be based on a large codebase. What I made was nihilist in the sense that some projects superseded the previous ones. Example: first I made https://umbrellajs.com/ , then decided to re-implement it all and created https://superdom.site/ with an alternative syntax.
Now I think I've found a balance in the middle as I'm finishing https://serverjs.io/ ; the library's public API should be fairly stable, but the implementation details can have their shortcomings and be kind of messy at places. To finish off with a great saying for the situation:
Perfect is the enemy of good.
Sometimes, management dooms a project, and as a programmer there is only so much you can do to move it forward. Interestingly, poor technical quality does not prevent a project from being commercially successful.
Other times, especially at the start of a project, or as a project manager, you have the ability to make it right. Do not pass on it! Have some realistic expectations about how far people will go with recommendations and best practices, but recognize opportunities to improve a design, as they are rare and valuable.
tl;dr: be the nihilist 95% of the time, but do not miss the optimist's opportunity that will happen 5% of the time!
Isn't that a good way to approach software development? You probably shouldn't make decisive decisions, you probably shouldn't freeze your project, and if you define it you risk missing opportunities outside that definition.
I don't think there's a deductive chain from those axioms to that choice of action. If you're a "nihilist programmer" (not a fan of this usage of nihilist/optimist) in this sense, there's nothing in your ethos that says it can't be better. Sure, it will always be crap, but you can make it better crap. You can do that in small bits or big bits.
The analogy I use for legacy production software is that it's a tire fire, one should endeavor not to make it worse (by unnecessarily adding more tires or fanning the flames), and if possible make it better (spray water on the thing), but it's never going to be an Eden even if you managed to put out the fire since you've still got a stack of tires that could reignite at any moment. If you start out with an Eden in the beginning, maybe you can preserve that, but even if it's ever been done it's not done most of the time.
The best is the enemy of the good, but the good is the enemy of the better. There are few things more irritating to someone with a "nihilistic programmer" mindset than seeing some self-satisfied "optimistic programmer" with their Good crap, which is still crap and could be made better. Not seeing any chance or value of improvement beyond good is the inverse of seeing no chance or value of improvement beyond broken; the problem with either view is that doing better isn't tried.
The "optimist" described here is a person who sees software in a broader sense: functional AND non-functional requirements. Non-functional requirements include maintainability, scalability, performance, security, configuration, etc., and the optimist will strive to implement them.
Now, my opinion:
My names for the "nihilist" programmer are "feature fairy" and "duct tape programmer". The problem with feature fairies is that they create more problems than they solve, and never volunteer to fix them.
Feature fairies like to get credited with completed features, but never with the defects associated with their contributions. Therefore they will usually play dumb when a bug happens, or an incident is declared, and make someone else clean up after them while they implement the next feature.
So after a couple of years, you have someone credited with a lot of features, and a team of people who have been cleaning up after that person. The duct tape fairy is now a 10xer, a rockstar whose time is so valuable that they need to be paid more, even promoted, even though this person is responsible for wasting 90% of the engineering payroll on fixing trivially avoidable defects.
The way to prevent that is to leave a trail of evidence that can link commits to bugs. When an incident happens, make sure to identify the commit id causing the problem and put it directly in the ticket. Make it very clear where the defects are coming from and who they're coming from.
Never volunteer to clean up after a feature fairy. By the time you do this, the feature fairy marked their task as complete and from the eyes of management you would be wasting your time working on a completed task. Rather than doing that:
- When the feature fairy wants to take on the next task, ask if they have fixed one of the defects caused by them, as evidenced by the commit id.
- Rather than opening a new ticket, reopen the original ticket. This better reflects the situation: you are completing the work the duct taper failed to complete, not adding new work. This also denies the duct taper their prized fake task completion.
When a feature fairy volunteers to be on a hiring committee, prevent it at all costs. The last thing you need is having to clean up after more people.
Be careful while doing this to not be perceived as negative.
I've rented cars myself via Turo a couple times in the past few years. Two of the guys were operating their own small rental businesses (5+ cars, parked around on local streets). At a couple hundred bucks profit per vehicle per month, depending on utilization, having a few cars like that seems the only way to make it worth the while. As a part time job of sorts.
In the East-Bay, AAA is making a massive investment in car-sharing with GiG. I wonder if it will work out financially given state of the world with cheap ride-sharing by Uber/Lyft.
Second, idle cars have a lot of value to owners. The car in my garage right now is giving me optionality and convenience, and that is worth a lot. In fact, I may exercise that optionality after posting this comment!
I wanted to buy a new Jeep Wrangler Unlimited, but as you may know those are enthusiast vehicles and not without their quirks, so I wanted to live with it awhile and really be sure that was what I wanted. Also I was going elk hunting in Idaho for 2 weeks and I didn't own a car so I thought, how perfect?
The trip and the Jeep were great, but it got me thinking about the economics of the whole thing. I noticed that my "host" I was renting from had about a dozen identical Jeep Wranglers on Turo, not multiply listed but completely different vehicles. I realized that the "Uber landlord" model was already taking hold here. I did the math - Wranglers are some of the lowest depreciating vehicles right now even with heavy mileage, and even with 2-3 days of rental a month the owner would likely break even at the end of the loan. Interesting business.
Other than that, it seems weird from the car owner's perspective.
If your attention slips in an Airbnb and you drop a mug and the handle breaks off, well, that's a few dollars and an apology to the owner.
If your attention slips in a car and you get in a fender bender, then even if no one is injured, that's a mess with insurance and at least several hundred dollars of repair.
So that was the experience for me as a customer. With airbnb it's completely different. I feel I often get more from the things I want (e.g. kitchen access is very common in airbnb's, no Airbnb ever tried to charge me for wifi) and the prices are often massively cheaper than hotels.
They have been a much easier option than a traditional rental car company.
They meet you at the airport.
You can rent hybrids.
You deal with a consumer who owns the car, not a customer service rep who couldn't care less about their job.
My complaints are that the app and website have terrible user experiences. Terrible is an understatement.
They were cavalier with my driver's license data, going so far as to email it around to vehicle owners as an image. Though I emailed their CEO and he stopped that years ago.
Still, I'm a big fan of Turo.
GetAround is my second choice. They have much more restrictive mileage limits so I stay away.
One only has to not pay attention for 30 seconds at the wrong time to do the same to an auto.
Does anyone know how successful these companies are? What kind of volume do businesses like Turo see?
I constantly see places where an idea from the book is relevant and I want to make people read a chapter of it. Examples include insights into evolution, artificial intelligence, morality, and philosophy. There's a short section on how people tend to argue about the definitions of words and how unproductive this is, that I always find relevant. There's a lot of discussion on various human biases and how they affect our thinking. My favorite is hindsight bias, where people overestimate how obvious events were after they know the outcome. Or the planning fallacy, which explains why so many big projects fail or go over budget.
The author's writing style is somewhat polarizing. Some people love it and some people hate it, with fewer in between. He definitely has a lot of controversial ideas. Although in the 10 years since he started writing, a lot of his controversial opinions on AI have gone mainstream and become a lot more accepted than they were back then.
On Writing by Stephen King. This is a biography masquerading as a book of writing advice... or it's the other way around. Whichever it is, I think it's a great book for any aspiring writer to read. King explains the basics of how to get started, how to persevere and, through his experiences, how not to handle success. Full of honesty and simple, effective advice.
Chasing the Scream by Johann Hari. Most people agree that the War on Drugs is lost and has been lost for decades now. But why did we fight it in the first place? Why do some continue to believe it's the correct approach? How has it distorted outcomes in society and how can we recognise and prevent such grotesque policies in the future? This book offers some of those answers.
Only if you're Indian - India After Gandhi by Ramachandra Guha. Sadly almost every Indian I've met isn't well informed about anything that happened in India after 1947, the year India became independent. History stops there because that's the final page of high school history textbooks. An uninformed electorate leads to uninformed policy, like "encouraging" the use of a single language throughout the country. If I were dictator, I'd require every Indian to read this book.
Karen Armstrong's A Short History of Myth is a very nice guide into mythology and what that and religion are. It's like a vaccine for any sort of fundamentalism or bigotry, if read with some accompanying knowledge of mythological traditions.
"How to Win Friends and Influence People" by Dale Carnegie, because it changed my understanding of people for the better.
"Surely You're Joking, Mr. Feynman!" by Richard Feynman, because it gave me a model for how to enjoy life.
"Models" by Mark Manson, because it helped shape my understanding of heterosexual relationships.
"An Introduction to General Systems Thinking" by Gerald Weinberg, because it illuminates the general laws underlying all systems.
"Stranger in a Strange Land" by Robert A Heinlein, because it showed me a philosophy and "spirituality", for lack of a better word, that I could agree with.
"The Fountainhead" and "Atlas Shrugged" by Ayn Rand, because they showed me how human systems break, and they provided human models for how to see and live in, through, and past those broken systems.
"Harry Potter and the Methods of Rationality" by Eliezer Yudkowsky, because it set the bar (high) for all future fiction, especially when it comes to the insightful portrayal of the struggle between good and evil.
You hear 'ancient wisdom' on how to lead the good life all the time. These ancient aphorisms came from a time before the scientific method and the idea of testing your hypotheses. Tradition has acted as a sort of pre-conscious filter on the advice we get, so we can expect it to hold some value. But now, we can do better.
Haidt is a psychologist who read a large collection of the ancient texts of Western and Eastern religion and philosophy, highlighting all the 'psychological' statements. He organized a list of 'happiness hypotheses' from the ancients and then looked at the modern scientific literature to see if they hold water.
What he finds is that they were often partially right, but that we now know more. By the end of the book, you have some concrete suggestions on how to lead a happier life, and you'll know the studies that will convince you they work.
Haidt writes with that pop science long windedness that these books always have. Within that structure, he's an entertaining writer so I didn't mind.
I have developed several habits:
a. Writing a Gratitude Journal
b. Going to the gym in the morning
c. Programming in the morning
d. Reading in the morning
I copied some of my highlights here:
Guns, Germs, and Steel. Jared Diamond
Influence, the psychology of persuasion. Cialdini
Justice: what's the right thing to do. Sandel
All of Feynman lectures on physics
The hard thing about hard things. Horowitz
Al-Muqaddimah. Ibn Khaldun
I love all the answers in here but please, please answer with more than just a title! I want to know why I should care about a book -- sell it to me, don't just throw it out there and ask me to do the work.
Ctrl+F these names in this page for rationale.
Is there an "awesome books" repo on Github? I wonder.
You will become a pessimist for a while after reading this, just because it feels like there's no meaning in all this, since everything repeats itself and nothing is forever. But when you recover from it, you'll find yourself much more insightful about the industry and able to make better decisions.
I did read it fairly early, and it had quite an impact on my life and thinking. It put into words a lot of my discomfort with a life focused on materialistic success. And it was inspiring to see an intellectual combine so many of the thoughts and topics he developed during his lifetime into one coherent and approachable book.
I found it by working my way through the list of joint nebula and hugo award winners (which is a really fun project, because all of them are amazing books). It is my favorite sci-fi book. It changes the way you look at gender, especially if you haven't questioned the concept much before.
No more Mr. Nice Guy -- Robert Glover
If my younger self had read this, I think my course of life would be very much different than it is right now. Just a caution that it might come off as misogynistic ramblings for some readers.
The Origin of Consciousness in the Breakdown of the Bicameral Mind / Julian Jaynes. Hard to tell if crazy or genius, but well worth a read. Read at 38; wish I had read this at 20 or so. Most of us take our inner voice for granted, but should we really? And what if there was evidence supporting the idea that there's another inner voice, but our modern upbringing suppresses it (though it does reappear with some illnesses, under duress, etc.)?
Different Seasons / Stephen King. A collection of four stories, NOT your usual King horror genre; one became the movie "Stand By Me", another became "The Shawshank Redemption", the third became "Apt Pupil", and the fourth will likely never become a movie. All are excellent. I actually read it at 16, which was the right time, but I'll list it here anyway; if you've seen the movies and liked them, it's worth reading - the stories are (a) much more detailed than the movies, in a good way, and (b) related in small ways that make them into a bigger whole than the individual stories.
Management (software/hardware oriented):
Peopleware / Demarco & Lister - read after I was already managing dozens of people. Wish I had read it long before. This book is basically a list of observations (with some supporting evidence and conclusion) about what works and what doesn't when running a software team. Well written, and insightful.
The Mythical Man-Month / Fred Brooks - wish I had read this before first working in a team larger than 2 people. Written ages ago, just as true today; a tour-de-force of the idea that "man month" is a unit of cost, not a unit of productivity.
"Science et Méthode" (Henri Poincaré, 1908)
"The Conquest of Happiness" (Bertrand Russell, 1930)
"The Revolt of the Masses" (José Ortega y Gasset, 1930)
"Brave New World" (Aldous Huxley, 1932)
"Reason" (Isaac Asimov, 1941, short story)
"Animal Farm" (George Orwell, 1945)
"Nineteen Eighty-Four" (George Orwell, 1949)
"Starship Troopers" (Robert A. Heinlein, 1959)
"The Gods Themselves" (Isaac Asimov, 1972)
"Time Enough for Love" (Robert A. Heinlein, 1973)
It's about tidying up, but also about making your living space harmonious without clutter. It's not one of those "get a box and put your pencils in it and then label it" books.
Technically this book is about how humans interact with things, but actually it covers a lot more topics than one would think: how humans act and err, how they make decisions, how memory works, and what the responsibilities of the conscious/subconscious are. Also, you'll start to dislike doors, kitchen stoves and their designers )
So now, when I hear a switching power supply whine in protest, I will think of it as the squeals of pain of the engineers whose lives I turned into a living hell because of my lack of appreciation for P = IV. I'm truly sorry. I wasn't thinking. (And this is just the first chapter of that book.)
The Fountainhead by Ayn Rand. One of the most inspirational stories I've ever read. A strong reminder to remain true to yourself in the face of all sorts of challenges and adversity.
Mastering The Complex Sale by Jeff Thull. I don't claim to be a great, or even good, salesman. But if I ever become any good at selling, I expect I'll credit this book for a lot of that. I really like Thull's approach, with its "always be leaving" mantra and focus on diagnosis as opposed to "get the sale at any cost".
The Challenger Sale by Brent Adamson and Matthew Dixon. Like Thull, these guys deviate from a lot of the standard sales wisdom of the past few decades and promote a different approach. And like Thull, a core element is realizing that your customers aren't necessarily fully equipped to diagnose their own problems and/or aren't necessarily aware of the range of possible solutions. These guys challenge you to, well, challenge your customers' pre-existing mindsets in the name of helping them create more value.
The Discipline of Market Leaders by Fred Wiersema and Michael Treacy. A good explanation of how there are other vectors for competition besides just price, or product attributes. Understanding the ideas in this book will (probably) lead you to understand why there may be room for your company even in what appears to be an already crowded market - you just have to choose a different market segment and compete on a different vector.
How to Measure Anything by Douglas Hubbard. It's pretty much what the title says. This is powerful stuff. Explains how to measure "things" that - at first blush - seem impossible (or really hard) to measure. Take something seemingly abstract like "morale". Hubbard shows how to use nth order effects, calibrated probability estimates, and monte carlo simulations, to construct rigorous models around the impact of tweaking such "immeasurable" metrics. The money quote "If it matters, it affects something. If it affects something, the something can be measured" (slightly paraphrased from memory).
I wish I'd read each of these much earlier. Each has influenced me, but I'd love to have been working off some of these ideas for even longer.
On top of that, some of Tim Ferriss' stuff on accelerated learning. Learn how to learn first, then learn everything else.
Each one had a significant positive impact on my life. And both are free online!
Surely You're Joking, Mr. Feynman! - Richard Feynman
What Do You Care What Other People Think? - Richard Feynman
Crime and Guilt: Stories - Ferdinand von Schirach
The Master and Margarita - Mikhail Bulgakov
Bulldog: A Compiler for VLIW Architectures - John Ellis
The War against Women (Marilyn French) - the underlying premise is wrong, but reading it is a good way to learn how to deal with semi-rational, but insane theses. And yes, I can defend this position with quotes / paraphrases from the book, with rational explanations as to why it's insane
How the Police generate false confessions (James Trainum) - former cop explains why harsh interrogation techniques are counter-productive, and how to defend yourself
Get the Truth (Philip Houston et al.) - how to tell when people are lying, via simple techniques you can remember
"More Money Than God: Hedge Funds and the Making of a New Elite" https://www.amazon.com/More-Money-Than-God-Relations/dp/0143...
Market Wizards, Updated: Interviews With Top Traders https://www.amazon.com/Market-Wizards-Updated-Interviews-Tra...
The New Market Wizards: Conversations with America's Top Traders https://www.amazon.com/New-Market-Wizards-Conversations-Amer...
Hedge Fund Market Wizards: How Winning Traders Win https://www.amazon.com/Hedge-Fund-Market-Wizards-Winning/dp/...
_Feeling Good_ because of the tools it contains to battle self-defeating feelings that lead bouts of sadness or depression. I wish everyone would read that book so that they can build mental immunity against circular, depressing thoughts.
High Output Management https://www.amazon.com/High-Output-Management-Andrew-Grove/d...
The Master Switch https://www.amazon.com/Master-Switch-Rise-Information-Empire...
Thinking Fast and Slow https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp...
Unbroken, Laura Hillenbrand. If you haven't read the book don't judge it by the (awful) movie.
The Liberators: My Life in the Soviet Army. Really opens your eyes to the problems and realities of communism. I love the author's dry sense of humor as he witnesses the absurdity of many of the things he encountered.
Sniper on the Eastern Front, Albrecht Wacker. A view of WWII through the eyes of a German sniper.
Auschwitz: A Doctor's Eyewitness Account, Miklos Nyiszli. A view of the holocaust through the eyes of a Jewish doctor in the Auschwitz concentration camp.
80/20 principle, while mentioned in the 4 hour work week, it really has a lot more to offer in the book. How you should go about leveraging your time. There was a real gem in there about how books are really the best way to acquire knowledge and a great way to approach reading in the university.
There was a speed reading and studying book I came across from a friend who owns a book store that really helped me. I wish I'd had that book before I entered high school. I can never recall the name, but I will try to find it.
I'm 30 now. I wish I had read this when I was 20. It would've made dating in my 20s so much easier. I came across it last year and it's probably the single most important book I'll ever read in my entire life, for the sole reason that understanding women will allow me to have a successful marriage one day. I cannot recommend this enough.
 Free online: https://www.scribd.com/doc/33421576/How-To-Be-A-3-Man
This book is a detailed study of what's wrong with the world and what can still be done. Chapter II brings input from various cultures on approaches that could improve things from the ground up. A must-read for us and future generations.
Can someone suggest something similar to this book?
-  https://www.robinwieruch.de/lessons-learned-deep-work-flow/
Similarly, On Intelligence is an absolutely brilliant book on what 'intelligence' is, how it works, and how to define it.
2) Hooked. Although it's very formulaic, Hooked provides a lot of good ideas and approaches on building a product.
3) REWORK. If you're a fan of 37 Signals and/or DHH, this is a succinct and enjoyable read about their principles on building and running a business.
Currently I'm reading SmartCuts and The Everything Store - both of which are great so far.
Remembrance of Things Past -- I'm still reading this, as it's a massive stream of consciousness book, but I wish I'd started it when I was younger so that I'd be done with it by now. It's just so weird to read it and experience the writing that I enjoy it for simply being different. As you read it just remember that every ; is really a . and every . is really \n\n.
Van Gogh: The Life -- I absolutely hate the authors. They're great at research, but I feel they had a vendetta against Van Gogh of some kind. Throughout the book, at times when Van Gogh should be praised for an invention, they make him seem like a clueless dork. Ironically, their attempt to portray him as a dork who deserves his treatment ends up demonstrating more concretely how terrible his life was because he was different. I think if this book were around when I was younger I might have become an artist instead of a programmer.
A Confederacy of Dunces -- Absolutely brilliant book, and probably one of the greatest examples of comedic writing there is. It's also nearly impossible to explain to people except to say it's the greatest example of "and then hilarity ensues".
Mickey Baker's Complete Course in Jazz Guitar -- After a terrible guitar teacher damaged my left thumb I thought I'd never play guitar again. I found this book and was able to use it to learn to retrain how my left hand works and finally get back to playing. Mickey Baker's album also brought me to the Bass VI, which got me thinking I could build one, and then I did and now I've built 6 guitars. I play really weird because of this book and I love it. This book also inspired how I wrote my own books teaching programming and without it I'd still be a cube drone writing Python code for assholes. If I'd found this book when I was younger it most likely would have changed my life then too.
Reflections on A Pond -- It's just a book of this guy painting the same scene 365 times, one for each "day of the year" even though it took him many years to do it. All tiny little 6x8 impressions of the same scene. I learned so much about how little paint you need to do so much, and it's also impressive he was able to do it. I can't really think about anything I've done repetitively for every day of a year. I've attempted the same idea with self-portraits but the best I could do was about 3 month's worth before I went insane and started hating my own face.
Alla Prima: Everything I Know About Painting -- Instructionally this book isn't as good as How To See Color, but as a reference guide it is about the most thorough book on painting there is. It's so huge it's almost impossible to absorb all of it in one reading, so I've read it maybe 5 times over the years.
We live in a world of thieves masqueraded as leaders.
!For inspiration:! 1. Losing My Virginity (Richard Branson) - Richard Branson's autobiography. From student magazine to Virgin to crazy ballooning adventures and space! I keep coming back to this when I feel like I need a morale boost. There isn't an Audible version of this book, but there is a summary-type version on Audible, "Screw It, Let's Do It", which does a good job curating the exciting parts.
2. The Everything Store (Brad Stone)
3. Steve Jobs (Walter Isaacson)
4. Elon Musk (Ashlee Vance)
5. iWoz (Steve Wozniak)
6. How Google Works (Eric Schmidt, Alan Eagle & Jonathan Rosenberg)
7. Dreams from My Father (Barack Obama)
!Business & Management:!
1. The Upstarts (Brad Stone)
2. Zero to One (Peter Thiel)
3. The power of Habit (Charles Duhigg)
4. How to Win Friends & Influence People (Dale Carnegie)
5. How to win at the Sport of Business (Mark Cuban)
6. Finding the next Steve Jobs (Nolan Bushnell)
7. The hard thing about hard things (Ben Horowitz)
8. Start with the Why (Simon Sinek)
9. Art of the Start (Guy Kawasaki)
!Escaping Reality! 1. Hatching Twitter (Nick Bilton)-Sooooo much drama! Definitely learnt what not to do! Very interesting read.
2. The Accidental Billionaires (Ben Mezrich)
3. The Martian (Andy Weir)
4. Harry Potter Series.
5. Jurassic Park || The Lost world (Michael Crichton)
6. Ender's Game (Orson Scott Card)
7. Ready Player One (Ernest Cline)
!Other honorable mentions:! Actionable Gamification (Yu-Kai Chou), I Invented the Modern Age (Richard Snow), Inside the Tornado (Geoffrey Moore), Jony Ive (Leander Kahney), Sprint (Jake Knapp), The Lean Startup (Eric Ries), The Selfish Gene (Richard Dawkins), Titan (Ron Chernow), The Inevitable (Kevin Kelly), The Innovators (Walter Isaacson), Scrum (Jeff Sutherland)
!Most if not all have an audio-book version!
If you are in a startup or plan to start one soon, reading/listening to books should become a routine. I try to get through at least one book a week, sometimes two.
IMO you won't really understand the nature and limitations of fiction until you've read JLB. His work won't change your life, as such, but it will divide it into two parts: the part that took place before you read him, and the part that comes after. You'll always be conscious of that division.
How to Win Friends and Influence People.
Think and Grow Rich.
The E-Myth Revisited.
The Science of Selling.
(stuff about stoicism)
Start With Why
The Subtle Art of Not Giving a F*ck
Think Like a Freak
I had signed up for a trial of one of these drugs. Having grown up in the 80s with a severe peanut allergy that required the use of the epipen more than a few times I was keen for anything that could help, even just something that would lessen the reaction so maybe I could choke down some benadryl rather than stabbing myself with that ~2 gauge (hyperbole) epipen that now costs a fortune.
First thing they did during the trial was give me two peanut tests, skin prick and blood test. Both came back negative despite positive results decades earlier as a child.
I would have never thought to get tested again on my own. Not being allergic has made such a huge difference in my life - I was always flippant about my allergy and had developed good safeguards to protect myself. But the amount of stress is caused me was amazing to have lifted. Hope that these drugs can bring that to others (or just get tested again and maybe you're one of the lucky ones!)
I miss Snickers.
But since some genius thought the best course of action was not exposing kids to it, we have the present situation (especially in the US)
> Big Pharma was unmoved, believing it would be impossible to patent a medicine that was essentially a ground-up peanut.
> [One of the companies working on the issue] has ties to the food industry, which has a vested interest in finding treatments for allergies.
Make of this what you will.
Going by first principles it's probably under $100.
2. I think this treatment could potentially be a lot more deadly than the problem. It's a case of unintended side effects...in this case over-confidence vs. a strict no peanuts ever policy.
Imagine a kid has had the treatment and can eat 3 or 4 peanuts without a problem...a year or two later, maybe older & wanting to show off or over-confident, the child eats 12 peanuts...has an anaphylactic attack and dies. This case vs. a child who religiously avoids all peanuts, never considers them. A possible unintended effect related to psychology that I think should be considered.
3. Antibiotic use in infants and toddlers has been linked to a higher risk of allergies (possibly including peanut allergies, though I don't know if that's been studied; maybe that's why peanut allergies are a big problem in the US but not in India, where most people don't take a lot of medicines). Try to avoid them in small children if possible, though please listen to your doctor's advice. Antibiotics have some side effects which aren't great, but on the whole they have saved an incalculable number of lives.
About 2 per cent of American children are now allergic to peanuts, a figure that has more than quadrupled since 1997.
> VCs, at least the best ones, are there for your company in good times and bad. There is a difference, trust me.
This part made me laugh so hard my entire body hurts. Now, I know pretty well who he is, which may be why he doesn't realize how much bad VCs hurt the industry, especially when a vast number do exactly the opposite of what he said: leaving as soon as there are any signs of trouble. And I know that from experience.
Now, do I always agree with ICOs? Not really, selling promises that particular way is bound to lead up to some disappointment from one side or the other. But they are perfectly valid to fund yourself if your service, your core business model can be compartmentalized that way. As he said: "The token that you sell in your ICO is the atomic unit of your business model."
And the tokens they are buying are just a promise - they don't bind the company to do anything. The company promises to use the BAT token in their future monetization model, but in fact they can pivot at any point, like many (if not most) startups do, and do something else, or maybe even do yet another ICO with another token. And that is on top of all the problems with crowdfunding, where even if there is a legal binding, and maybe fractional ownership, without all the regulations that were invented to protect investors, the founders/executives can still do anything with the money they received from the funding event: https://medium.com/@zby/the-problem-with-crowdfunding-81b53f...
Update: Even if most ICO creators are honest now, they'll soon be crowded out by scammers, because honest founders will find other ways to fund their startups, but for scammers it will never be easier than with ICOs.
This is backwards. Raising via an ICO means no VC can ever push you out as CEO or take control of your company. Their priority is making the company get to a big exit, with or without you. When Ev was running Twitter, Fred wasn't a fan of his and had no problem plotting behind his back with Jack and eventually pushing him out.
However, VC is hundreds, if not thousands, of years old. Fine-tuned and tweaked from the '80s on to the dot-com era and through Facebook & Google and beyond.
ICO is just a few years old - it's typical for the disrupted to not feel threatened until it's too late. This, in my mind, is the real model going forward. Fuck pandering to Sand Hill road or SV at all - launch an ICO from anywhere in the world on your whitepaper and testnet (shaky???) dev...
ICO's may seem ridic, but this is just the beginning. We are now running at internet speed and there's gonna be a point (soon? who knows when) where this stuff is going to all be automated and 24 seconds will seem quaint.
Welcome to high frequency funding.
I have raised VC, PE, and debt. Early-stage and late stage. Here's my take on "ICO vs VC."
For the investor, they are akin to commodity futures trading. The underlying value of the token is nil, as is the degree of control over the underlying property. But returns from price speculation can be very rich.
For the issuer, they have the money virtually without strings attached. There is no other form of assistance and no loyalty implied in either direction.
For example, I'd be shocked if there were positive "operational" returns from a token like the Brave coin. For that to happen, Google, Facebook, and the rest of the ad industry would have to grant sanction to the vendor of a Chromium-based browser startup to turn the entire industry on its side. I doubt it. Seriously.
For the investor, they get some modicum of ownership and control of the underlying property (sometimes not much, but usually a lot). There is an implied responsibility to help with follow-on funding, but nothing solid. The investment is risky but not speculative.
For the issuer (of preferred shares, AKA the company), they get the money with all kinds of strings attached. If the VC is top-tier (e.g., Fred, Kleiner, NEA, etc.), significant branding, easy intros, and many other benefits can accrue. If the VC is less prestigious, the operational impact is more neutral. (No VC can make your company grow or be successful; that's on you.)
1. If I could pull off an "ICO" (bad name) at my next company, I'd do it immediately. Great upside and little downside.
2. I say "immediately" because I don't think that this vehicle will last long in its current unregulated state. There will be failures. There are enemies. There will be evil deeds (fraud), and those deeds will involve unaccredited investors.
3. The ICO will be a short-term speed bump to VCs.
4. That all said, who wants to join me and start an "ICO production" company to create the coins and infrastructure for them to do their own ICOs? Speed is life, and I know some VCs... :)
- there is no guarantee of limited supply of tokens ( no promise that BAT will be limited )
- there is no guarantee that company will not come up with secondary token (ex: advanced attention token)
- Also there is no indication of what 1 BAT will get you. All calculations etc subject to change
Also, how else do you think BAT sold so fast, if it wasn't for institutional money?
A national currency is an irredeemable medium of exchange, made valuable because it's -- by law -- exempt from capital gains tax (it measures capital gain), and because it's been given legal tender status.
Why would anyone trade an irredeemable currency issued by a private corporation? "Currency" is surely a misnomer, because no one would want to buy or sell goods and services in exchange for it, which makes it more like irredeemable equity, which makes no sense either.
Ownership of PTOY carries no rights, express or implied, other than the right to use PTOY as a means to obtain Services, and to enable usage of and interaction with the Platform, if successfully completed and deployed. In particular, you understand and accept that PTOY do not represent or confer any ownership right, stake, share, security, or equivalent rights, or any right to receive future revenue shares, intellectual property rights, or any other form of participation in or relating to the Platform, and/or Foundation and its corporate affiliates, other than rights relating to the receipt of Services and use of the Platform, subject to limitations and conditions in these Terms and applicable Platform Terms and Policies (as defined below). PTOY are not intended to be a digital currency, security, commodity, or any other kind of financial instrument.
So... by buying the tokens, you are getting, basically, nothing.
You have a sufficient understanding of the functionality, usage, storage, transmission mechanisms, and other material characteristics of cryptographic tokens like Bitcoin and Ether, token storage mechanisms (such as token wallets), blockchain technology, and blockchain-based software systems to understand these Terms and to appreciate the risks and implications of purchasing PTOY;
You've also got to fully understand blockchains.
You have carefully reviewed the code of the Smart Contract System located on the Ethereum blockchain at the addresses set forth in Exhibit B and fully understand and accept the functions implemented therein;
You've also got to be an expert programmer fluent in all of Ethereum's security weaknesses, plus you'd better have a disassembler handy to reverse engineer their compiled code (they don't provide any source code; plus, 'Exhibit B' doesn't even give the contract address anyway)
You are not purchasing PTOY for any other purposes, including, but not limited to, any investment, speculative, or other financial purposes;
Sure, sure. That's why people are buying these things, right?
It goes on... you also agree to indemnify the company against everything, all warranties are disclaimed, no liabilities can be held against them, you waive your rights to legal actions against the company, or any class actions (you must agree to arbitration). Oh, and they naturally reserve the right to modify these terms at any time without notice.
No-one in their right mind would agree to these kind of terms, and yet they are common across many ICOs. It is madness. And I haven't even mentioned their proposed application (healthcare on the blockchain) which is dumb in so many other ways.
Quite simply: No one is going to rewrite the Linux kernel in Rust. It is far too big, and you are not solving any real issues either. Rust only protects you from a small fraction of errors, and while for an application like a browser this can be a big gain, I would argue that it is negligible for a kernel in general. The reason is that all the device IO, component interaction, privilege escalations, logical errors, hardware errors, and firmware errors/bugs can NOT be addressed by Rust. Even for a browser, Rust is only a band-aid. The amount of logical errors and security holes in something as complex as a modern web browser is more than enough of an attack surface. No need for a rogue pointer to weird memory.
What is MUCH more viable though is a project to compartmentalize the Linux Kernel into HVMs. I forgot the name but there are efforts to put nearly everything into its own HVM. Which means if the printer driver goes nuts, it can't really do anything to your system except not print anymore. If your graphics driver goes nuts, well then you won't see anything... And so on.
This means almost no code rewrites and still MUCH higher protection than Rust. Rust does not compartmentalize. If any of your system components is fucked, your whole system is still fucked. That is why it's pointless to rewrite a kernel because of a language. You need to compartmentalize it...
Look at QubesOS for an early user-space effort. Would be nice to have a Qubes-Kernel too.
Discussion about whether this is a good idea or not, and the problems of doing so, can now commence in earnest, and without the pesky problem of it being mostly theoretical.
On the off chance the author is reading the comments here, there are a couple security features they need to add for it to be more complete (I'm probably missing some things).
1) They should add a call to access_ok(VERIFY_WRITE, pointer, mem::size_of::<Syscall>()) in rust_syscall_handle to ensure that the userspace pointer points to valid userspace memory of the right shape for the syscall type.
2) They should sanitize (hopefully using some zero-cost method) the Syscall type to guarantee that it is well-formed before calling handle. If userspace constructs some normally-impossible Syscall, it's unclear how a Rust match pattern handles it (i.e., if an enum has only 3 variants and you pass in a falsely constructed variant with tag 100, is match guaranteed not to just use a jump table and accidentally jump 100 instructions?)
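To illustrate point 2 with a minimal sketch (the `Syscall` variants and tag values here are hypothetical, not the project's actual types): in safe Rust you never transmute a raw tag from userspace into an enum, because any out-of-range discriminant is undefined behavior. Instead you parse the tag explicitly, so a bogus value is rejected rather than jumping off the end of a table:

```rust
// Hypothetical sketch: validate a raw tag copied from userspace instead of
// trusting it. Transmuting an out-of-range value into the enum would be UB.

#[derive(Debug, PartialEq)]
enum Syscall {
    Read,
    Write,
    Close,
}

// Parse the raw discriminant explicitly. A bogus tag (say, 100) falls
// through to None rather than indexing past the end of a jump table.
fn parse_syscall(tag: u32) -> Option<Syscall> {
    match tag {
        0 => Some(Syscall::Read),
        1 => Some(Syscall::Write),
        2 => Some(Syscall::Close),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_syscall(1), Some(Syscall::Write));
    assert_eq!(parse_syscall(100), None); // rejected, not UB
}
```

The cost is one range check per syscall entry, which seems like a reasonable price for turning potential UB into an explicit error path.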
If you rewrite the Linux kernel in Rust, one module at a time, you'll just end up with C written in Rust syntax, with raw pointers all over the place.
Most of the headache was build toolchain integration stuff. I did manage to get a few simple things slotted in that appear to work transparently, which made me hopeful for future more complicated things to play with like new process mailbox implementations, etc.
Anyway, great write up and a cool project!
BTW Redox OS (written in rust) originally supported Linux syscalls which I thought was a super cool feature.
Jorge Aparicio started steed, a libc implemented in Rust for Rust. It seems like a pretty interesting approach.
Eek! That is, indeed, unsafe.
But this shouldn't be messing with the entry asm at all. Just add your new syscall to the syscall table, just like any other syscall.
Now a kernel for a new OS, that'd be something.
Then I watch a SpaceX livestream and realize, eh, not so much. ;)
https://youtube.com/watch?v=PFoOqqSIYpw around 22:28 mark
In fact, the speed settles at 18,000 km/h after the landing.