The service was provided for free to the government, the company organised free training sessions for government clerks, and it was the driving schools who paid for the service when dialling into our Minitel servers.
It technically wasn't a monopoly, because the driving schools could still go down to the préfecture and do everything with pen-and-paper forms.
The company tried on a number of occasions to get the driving schools to move from the Minitel service to the new web version. Every single time, there was a huge push-back from the driving school unions, about how expensive the new service was, and how "unusable" the website was, compared to the Minitel.
We even had people calling in, saying that we were extortionists. "We've been using this for over 20 years, and never paid a cent; now you want us to pay xx a month?" I guess some of them really didn't look at their phone bill.
I heard about 4 or 5 "planned terminations" of the Minitel service during my stint from 2010 to 2015. France Telecom/Orange even provides a "Minitel over IP" service these days, where a website can be enrolled into their payment service, and users pay per minute on the website. It's a superb scam tool (just have a hidden iframe open a pay-as-you-go page), and Orange is constantly fighting the fraudsters.
There was the equivalent of the hug of death multiple times every year when students were checking their results, overloading the servers as (tens or hundreds of) thousands of people furiously dialled in simultaneously to get results from nationwide exams such as the infamous Baccalauréat.
In 1981, for the presidential election, the results were broadcast live on the Minitel and showed up live on the news:
(On TV) https://youtu.be/rJHUZNlO9ao
(Remastered Minitel output) https://youtu.be/JIZ_D34J3-I
It was a great time: I learnt a lot about the geek community and its culture of sharing, and got access to many games which, at age 15, I could not otherwise afford.
This is in part because there are still macros such as PyBytes_GET_SIZE which directly access struct members, and these macros are part of the stable interpreter ABI. That doesn't mean small-integer optimisations and the like for length fields aren't possible, it just means they can't happen for Python 3 any more. Tagged pointers would probably break too much code to ever happen.
Well known as it may be, people are still surprised that a bytes object requires at least 33 bytes (due to the implicit extra NUL byte), an empty string is around 50 bytes, and every item in a dict or set takes between 30 and 70 bytes. All this overhead adds up.
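You can see this overhead directly with sys.getsizeof (exact numbers vary a little between CPython versions and platforms; these are rough 64-bit figures):

```python
import sys

# Per-object overhead in CPython (64-bit builds; exact numbers
# differ slightly between versions).
print(sys.getsizeof(b""))   # empty bytes object, roughly 33 bytes
print(sys.getsizeof(""))    # empty str, roughly 50 bytes
print(sys.getsizeof({}))    # empty dict, typically 64+ bytes
print(sys.getsizeof(0))     # even a small int costs ~28 bytes
```

Multiply that by millions of entries in a dict and the overhead dwarfs the payload.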
Borg works around these problems with a simple hash table (straight out of the text book, with some associated issues). Even though that one is in itself inefficient in how it uses memory, it still only uses a fraction of what the equivalent dict in CPython 3.6 would use. I recently added a similar, pure Python construct in another place (borg mount).
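A textbook hash table of the kind described can be far more compact because it keeps entries in flat arrays rather than as per-item Python objects. This is only a sketch of the general technique (linear probing, no resizing or deletion -- some of the "associated issues"), not Borg's actual implementation:

```python
# Minimal open-addressing hash table with linear probing.
# Illustrative sketch only; not Borg's C hashindex.
class FlatHashTable:
    EMPTY = object()  # sentinel for unused slots

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.keys = [self.EMPTY] * capacity
        self.values = [None] * capacity

    def _probe(self, key):
        # Walk forward from the hash slot until we find the key
        # or an empty slot.
        i = hash(key) % self.capacity
        while self.keys[i] is not self.EMPTY and self.keys[i] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        i = self._probe(key)
        self.keys[i] = key
        self.values[i] = value

    def get(self, key, default=None):
        i = self._probe(key)
        if self.keys[i] is self.EMPTY:
            return default
        return self.values[i]

table = FlatHashTable()
table.put("chunk-a", 1)
table.put("chunk-b", 2)
print(table.get("chunk-a"))  # 1
```

Missing pieces (resizing, deletion with tombstones, load-factor management) are exactly where such a "straight out of the textbook" structure picks up its rough edges.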
Anyway, there are many products kept out of the GST's purview, and therefore this is more like release 0.1a of GST. The idea that it will spur economic growth to the extent projected is an exaggeration.
That means, basically, converting it into total "callback hell", so that functions don't return, they just call other functions with callbacks.
(There's a special "exit" callback that is implicitly the callback for the main function.)
So basically in Scheme, your code is always in a callback-passing style, just that the language cleverly hides it, and then lets you explicitly access the current callback (using call/cc).
Callbacks can of course take several arguments. Most languages have an asymmetry where functions have several parameters but can only return one value. With continuations, it's easy to imagine calling them with several arguments. So in a language with continuations, it makes sense to have multiple return values too.
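In Python terms (hypothetical function names, just to illustrate the style): a function in continuation-passing style takes an extra callback parameter and "returns" by calling it -- and nothing stops it from passing that callback several values at once:

```python
# Direct style: one return value (Python fakes multiple values
# with a tuple).
def divmod_direct(a, b):
    return a // b, a % b

# Continuation-passing style: "return" by calling k, and the
# continuation naturally takes several arguments.
def divmod_cps(a, b, k):
    k(a // b, a % b)  # two genuine "return values"

def show(q, r):
    print(f"quotient={q}, remainder={r}")

divmod_cps(17, 5, show)  # prints "quotient=3, remainder=2"
```

This is the asymmetry the comment points at: calls already take many arguments, so once "returning" is just calling a continuation, multiple return values come for free.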
1. Write some JS code in Chrome and verify the expected behaviour.
2. Add a debugger statement.
3. When the debugger pops up, go down a few frames and add a breakpoint.
4. Right-click the frame and select "Restart".
5. At the new breakpoint, write some code in the console to modify the state.
6. When you step through the code, it now does something else.
I leave the rest to imagination.
If you look at it from the assembly perspective, it's just a jump that's been augmented with state (the closure) and additional parameters. I think trying to describe it as a snapshot, or multiple returns, is confusing, since it describes them in terms of their stack behaviour.
The easiest way to think about them is to add an implicit argument to each function, which is the place to return to (jump to with the context, augmented with the return value). Call it c. Then return x is just c(x).
There is no longer any stack or implicit return target "above me". Removing that common control-flow assumption lets you build all sorts of different plumbing and get into an arbitrarily deep amount of trouble (the good kind and the bad kind).
call/cc has a pretty natural implementation in this model (heap-allocated activation records).
But as someone else mentioned, if you choose the simple continuation model, that makes a lot of choices for you in the runtime and the compiler. A common complaint from compiler land is that it makes it difficult to reason about reordering later on. You also lean really heavily on the GC to clean up the frames that the stack was taking care of for you (see "Cheney on the MTA").
If the continuation could be resumed at most once, this would be more like suspending a thread/fiber and resuming it later.
You can reimplement low-level control flow with this, but generally it is most useful as a reinversion of control. Some code (like async IO) expects callbacks, so you lose control over the program flow, which makes composition difficult. You can re-invert this by using futures, which often just wrap continuations.
Imagine running a program in a VM. You know how you can take a snapshot and then restore to it later? That snapshot is equivalent to a continuation.
Another way of phrasing it is, it's your program frozen in time. You can snapshot your program and restore to that point later.
To put it technically, step through each call stack frame, serialize all the local variables, and you have yourself a continuation. To invoke it, call those functions in order and set the local variables to those values, then set the program counter to wherever it was. (You don't literally do this, but maybe that makes it easier to understand what's going on with it.)
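Python has no first-class continuations, but a generator gives a rough feel for the "frozen program" idea: suspending captures the local variables and the position in the code, and resuming picks up exactly there. (Unlike a real continuation, each suspension point can only be resumed once, and only forward.)

```python
def counter():
    total = 0
    while True:
        total += 1
        yield total  # suspend here: locals and position are saved

snapshot = counter()   # nothing runs yet
print(next(snapshot))  # 1 -- runs up to the first yield
print(next(snapshot))  # 2 -- resumes with total intact
```

A true continuation would additionally let you save the suspended state, resume it, and later go back and resume the *same* saved state again.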
The confusion: What about a database connection? Or a network connection of any kind? An open file handle? Etc. The answer is that those things can't be saved in a continuation.
The way that this works in Scheme is that there's a special primitive called "dynamic wind". It takes three callback functions: "before", "during", and "after". It invokes those callback functions in order. If execution leaves "during" for any reason whatsoever, then "after" is invoked.
Here's the kicker: if execution goes back into "during", then "before" is invoked again. I.e. if you save a continuation inside "during", then "before" is the place where you'd put the code to re-initiate a database connection or re-open a file handle. Or fail.
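A first approximation of dynamic-wind in Python (deliberately simplified: try/finally covers the "leaving for any reason" part, but re-entering "during" via a saved continuation -- the part that would re-run "before" -- has no Python equivalent):

```python
def dynamic_wind(before, during, after):
    """Simplified sketch of Scheme's dynamic-wind.

    before/after bracket during; after runs even if during raises.
    The re-entry behaviour (before running again when a saved
    continuation jumps back into during) is not expressible here.
    """
    before()
    try:
        return during()
    finally:
        after()

log = []
dynamic_wind(
    lambda: log.append("open connection"),
    lambda: log.append("do work"),
    lambda: log.append("close connection"),
)
print(log)  # ['open connection', 'do work', 'close connection']
```

Which is, of course, just Python's context-manager pattern -- the interesting extra power of the Scheme version is precisely the re-entry case.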
And of course, no discussion of continuations would be complete without the argument for why call/cc is generally an anti-pattern: http://okmij.org/ftp/continuations/against-callcc.html
Yet "On Lisp" presents several interesting ways to use them, and they are extremely powerful. One particular use is that you can implement cut, fail, and mark for non-deterministic backtracking. If you've used emacs lisp and you've ever written an edebug specification for how to debug a macro (https://www.gnu.org/software/emacs/manual/html_node/elisp/Sp...), some of the more complex features require backtracking: https://github.com/emacs-mirror/emacs/blob/0648edf3e05e224ee...
That's an area where continuations really shine, because the implementation can be just a few dozen lines compared to hundreds.
(elisp doesn't actually use continuations -- this is just an example of the territory they're useful in.)
What it does is show where every asset on a web page is loaded from. It allows you to visualize how many different requests go into building just one web page. While it's gotten much better, the Houston Chronicle (https://chron.com) used to make about 500 individual requests to build its home page. It's down to about 125.
It's best to run it across two different monitors, with IP Request Mapper on one monitor and your "normal" browser window on the other. Then enter any URL and watch the map start populating based on geolocating every request made by the page.
But it's projects like ipinfo.io that make these other things possible. Standing on the shoulders of giants and all that...kudos to you, coderholic.
My only marketing was posting it on Stack Overflow, where it was getting upvotes.
Good reviews are also a big factor there. When users like your project/product, they will market it for you.
What if spending money on marketing would have made you grow twice as large? Twice as fast?
When people say "I didn't spend money on marketing", the only translation is "I knowingly overlooked massive growth opportunities."
I read that you use Elastic Beanstalk for your server config, but I wanted to ask:
1. What programming language did you use?
2. What, if any, configuration did you have to do to the Elastic Beanstalk config to deal with network spikes and autoscaling?
- https://db-ip.com/api
- https://ipapi.co
- https://freegeoip.net
- ipinfodb.com
- https://www.iplocation.net
- http://neutrinoapi.com
- http://www.ip2location.com
- https://www.telize.com
and a few dozen more. I wonder if collectively they are serving over a few billion requests per day. Microservices & API culture FTW!
I mean, you might spend 20 minutes more to set it up, but then you're safe from having to rely on a third-party service.
Anyway, kudos to coderholic for creating this and sharing the story.
However, it is important to acknowledge that he did put himself into a position where he was available to become lucky (= he built the API and linked to it).
Minor nit, but with that level of traffic I'd expect you to be bragging about P99.99 latency, not P90.
Keep improving this and with the rise of web personalization, the demand will continue to grow.
- I somehow can remember that domain. I don't have to google "my ip" and dig through weird domains that change all the time
- The design is clean and simple. Not too much information, no ads, loads fast.
Building what folks want, even developers, is so obvious that I think we often forget about it. It's also not as glamorous as self-driving cars or rockets, so it gets discounted easily.
Sound points though
Essentially, a blocking script in the DOM (<script src="...api.js"></script>) that prepopulates the window object. With clever error handling, this could improve perceived performance significantly.
A few questions:
1. What differentiates you from ip-api.com and other providers?
2. Do you use MaxMind?
3. Is there an option for no-throttling? 100s of simultaneous requests?
I aggregate multiple IP databases for my SaaS (https://www.geoscreenshot.com) and I need highly performant, reliable IP lookups.
I got inspired and started researching and building (btw, failing so far).
I am ready to launch a startup and currently trying to figure out what to focus on (so many ideas!).
I posted an "Ask HN" earlier today. Wondering if anyone might have some thoughts or advice on this:
400 * 250M / 320K = $312,500 per month.
Or $3.75M per year.
Not counting the expenses.
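Spelling out the back-of-the-envelope math (all figures are the commenter's assumptions about price per plan, requests covered per plan, and total daily traffic):

```python
# Hypothetical figures from the comment: a $400/month plan covers
# 320K requests, scaled up to 250M requests.
price_per_plan = 400           # dollars per month
requests_per_plan = 320_000
total_requests = 250_000_000

monthly_revenue = price_per_plan * total_requests / requests_per_plan
print(monthly_revenue)       # 312500.0 dollars per month
print(monthly_revenue * 12)  # 3750000.0 dollars per year
```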
* It looks like exceptions carry no error information. When something goes wrong, you know nothing. Is that right?
* Calling finalizers from GC is usually troublesome. They get called late, so they can't be relied on to close files and such. They also have to be prevented from making trouble by doing things you can't do during GC, or by "re-animating" the object being deleted. How's that handled?
* The notion that a variable's type is established at initialization is becoming mainstream. How about extending that to structures? The fields of structures could get their types inferred from the structure's constructor. (There was a statically typed variant of Python, Shed Skin, which did this.)
I've found that the capability system is both the most exciting part of Pony and the most difficult part for a newcomer to grok.
After O'Reilly moved to DRM-free books, their 2009 sales went up by 104% http://toc.oreilly.com/2010/01/2009-oreilly-ebook-revenue-up...
In other interviews, he seemed confident that DRM wasn't worth it: https://www.forbes.com/forbes/2011/0411/focus-tim-oreilly-me...
Perhaps some part of the equation has changed since then. I'm looking forward to a deeper analysis of the business reasons for this.
I'm also interested to hear what more authors think - I wonder how many agree with Martin Kleppmann (Designing Data Intensive Applications) https://twitter.com/martinkl/status/880336943980085248
This independence day weekend there were a lot of sales, so I purchased:
* "Programming Clojure, Third Edition" from pragprog (30% off sale)
* The entire collection of "Enthusiast's Guide to ..." from rockynook (each for $10)
* "The Quick Python Book 3e", "Serverless Architectures on AWS", "Event Streams in Action", "Get Programming with Haskell" from Manning (50% off)
These sales are the only way I can afford the volume I read. Some of that money would have gone to OReilly authors, but they deleted my full cart with $100 worth of stuff before I could purchase!
EDIT: OReilly catalog seemed large & redundant with publishers (packt) offering the same materials on their sites. Some like Wiley / MKP only offered very few items from their catalogs. Others like Rosenfeld / rockynook / no starch now provide DRM free options directly from their sites. I'm hoping at least OReilly reconsiders selling their Animal books again.
1) Book sales have been consistently declining overall, in all media. It's not clear that DRM has much to do with this.
2) They'll still be selling DRM-free through at least one merchant, Google Play. (It's not clear whether this policy extends to Amazon as well, but they wouldn't be the first publisher selling DRM-free there; Tor's science fiction novels have been DRM-free through all merchants for a few years now.)
This is definitely true for me, and one of the reasons why Oreilly is one of the defining "colors" on my bookshelf. In particular with technical literature I really need to get a good look into the book before I make a commitment and a decision to buy. I just don't want to spend money first and then stick to something that turns out to be rather disappointing.
So my usual way of buying books is to download various publications on a topic via Bittorrent and then buying the best one once I know what I want. This is similar to going to a public library, getting a few books, and buying the most convincing one for long-term use. If there was a micropayment way of paying for the short-term evaluation, I would be more than happy to pay for that (as I implicitly pay via library contributions, which go to the publishers to some part).
Having said that, Oreilly traditionally had a market of being the "printed out manual of open source software", which I'm pretty sure is dead by now, and I wonder if they can reposition completely. One thing I noticed is that they now often sell books that have titles that sound very general "Data Science for blabla" but turn out to be really just tutorials/manuals for some particular framework. That's the kind of book I would want to avoid. Nothing against good examples, but I don't need printed out tutorials.
Oreilly author here. FWIW a ton of us were caught off guard by this as well. At the same time, I can't say I'm surprised.
My commercial incentive for working with O'Reilly wasn't about the book per se. I found a ton of value in working with them on their peripheral activities, including Safari and their Strata and AI conferences. I think other folks who write for O'Reilly tend to do the same things.
Pointing out where O'Reilly is making money: it tends to be large companies paying for access to Safari now.
They will be putting other content in there now.
Print is a dying medium. That being said, a ton of people still prefer print.
I don't think any end user or author of theirs is "happy" about this per se. One benefit I liked of the online store was the ability to point people at it for pre-releases and updates. You can't really do that with Amazon.
I may be naively hopeful in saying this, but..
That being said: this should allow them to invest in other distribution channels now as well.
Oreilly showed they know how to run a distribution channel and may use that expertise in other areas.
As someone closer to this than a lot of people, I'm happy to answer general questions about the process, other ways this could affect us etc, if that helps.
If they stop selling then they lose 100% of my business which is about $100 a year.
I don't use other formats; they mangle technical books too badly. Some other publishers like Apress and MS Press do okay too, but if O'Reilly pulls out then it's quite a blow.
I asked him about it and I think he'd rather have the latter benefits than the minuscule compensation:
"Piracy is a double-edged sword. On the one hand, it means you receive no compensation for the benefit readers get from the work you put in. On the other hand, pirated books act as implicit marketing, expanding awareness of you and your book(s)."
So I bought the books, but asked if he had another copy of his own book. He said that he did not, but that he guessed he should keep a copy since, after all, he was the author. There's a lot of trauma behind him saying something like that.
Without more data (or really... any), this conclusion is pure speculation.
Is it just Google has more weight they can throw around and O'Reilly didn't want to 'rock the boat', or was it a technical problem?
Apparently the Google Play versions of O'Reilly books are formatted strangely and aren't a direct PDF of the physical books.
The point I've emphasized above really matters a lot for people who do read many books. Despite select chapter previews that some publishers provide, there are people who really want to do their own evaluation of something before committing to buying it.
> My feeling is that most people who choose pirated books are unlikely to pay for them, even if that's the only way to get them. As such, I'm inclined to think the marketing effect of illegal copies exceeds the lost revenue. I have no data to back me up. Maybe it's just a rationalization to help me live with the knowledge that no matter what you do, there's no way you can prevent bootleg copies of your books from showing up on the Net.
Again, the emphasized sentence above has been known for a very long time in the areas of music, movies, TV shows, books -- any content, actually. In my observation, people who pirate books also tend to get into a habit of hoarding rather than reading (given low storage and bandwidth costs). Leaving aside people in countries with weaker currencies who cannot really imagine buying a lot of the English-language technical content produced, I doubt the real loss in revenue is even substantial.
Books also, depending on the subject, require investments of time, attention, memory and repeated reference, unlike movies, TV shows and music, which most of the time require a one-time investment. So I would not put books in the same category as those others when it comes to piracy.
I'm not at all happy with O'Reilly's decision, and did write to support at oreilly saying that this makes it difficult (finding DRM free content on amazon or elsewhere in multiple formats) and that I wouldn't be buying O'Reilly products again. I received a standard reply thanking me for the feedback and pointing me to the blog post. My guess is that the direct customer relationship and brand recognition through its website is going to be lost along with this decision.
I don't know if O'Reilly will change the decision, but people who do value the freedom of DRM free content in different formats must voice their opinions by writing to O'Reilly support.
My work bought a copy of Designing Data-Intensive Applications for the team. I've started reading it, but lugging around 1 kg of book every day gets old really quickly; I wish they had included a PDF download coupon or something inside.
I always appreciated that they were available in every format (PDF, ebook, etc.) and thus easier to search. You can even sync them to your Dropbox automatically after purchase and download them from there.
We had a book service at a former company and it was terrible: one page at a time through a clunky web interface. Being able to download and scan them was much appreciated.
But as internet search gets better, you find quick solutions on Stack Overflow. It must be hard selling books.
Maybe some did, I don't know. I suspect that the ones that didn't, probably wouldn't buy a similar book from anyone, not because of piracy, but because they just aren't into that particular kind of product. So no (real) harm, no foul.
Blog post here: https://www.linkedin.com/pulse/oreilly-mission-spreading-kno...
As a side note, I really like the way the author of this blog has constructed his "Contact Me" bit at the bottom, it's very intuitive, clever, and uses URI schemes where appropriate, nice job!
Most researchers/academics lie somewhere on this spectrum. (Well I guess most human beings involved in any activity probably).
On one end are the salespeople, who love to make a mountain out of a molehill they just discovered. On the other are the perfectionists, who look like slackers because they never get anything done, never resolving their analysis paralysis.
There are very few who are exactly in the middle of the spectrum. The middle is a point of unstable equilibrium. You have to work very hard to stay there and can easily fall off to one side or the other.
As do most of the Google Translate pieces, even though I get the feeling that automatic translation of texts is now seen almost as a solved problem (it's not): all that Google Translate does is change text from the original language into a second one which is not a real language, just a language that is sometimes very close (grammatically and lexically) to one the agent/user knows.
The idea is that we should try to look harder and have fairer judgements about the actual results and not get stuck on the methodologies.
Anyone else thought that this was very weird? The author appears to be complaining about the fact that reputable people/labs can post a PDF on arXiv and be taken seriously. How is this avoidable? Without arXiv, they could just post the PDF on their website or anywhere else.
The "risk" associated with publishing crap on arXiv is the same as always: people notice it's crap and you get a bad reputation. I'm not sure what ideology has to do with it.
"Let the market decide!"
"Ramazan went to the family's home to apologize, only to be greeted by the father, Emine, two sisters and a lot of very sharp knives."
There's no technological way to fix people who would try to kill someone over a text misunderstanding without figuring out the truth first. People like this are murderous garbage; let's not blame tech mistakes for the fact that some people are scum. Everyone involved knew damn well that a couple of characters would make the difference between a benign text and an offensive one, and frankly, even if the text had been offensive, murder was not justified. Scum.
For me, distinguishing between text as something that is intended to be read by humans and strings as serial sequences of characters that may or may not be human readable but will be processed by one or more computing automata is useful. For example in C, the string "Hello World" is terminated by a null character. The null character is not part of the text the string encodes.
Or to put it another way, I find that treating strings and text as two different layers of abstraction clarifies my intent. Code that manipulates text is built on code that manipulates strings, and in between there's parsing that has to occur.
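The C example translated into Python terms makes the layering concrete: the encoded string carries a terminator byte that is machinery, not text.

```python
# The text is "Hello World"; a C-style string encoding of it
# appends a NUL terminator that is not part of the text.
text = "Hello World"
c_string = text.encode("ascii") + b"\x00"

print(len(text))      # 11 characters of text
print(len(c_string))  # 12 bytes in memory

# The parsing layer in between: recover the text from the string.
recovered = c_string.rstrip(b"\x00").decode("ascii")
print(recovered)      # Hello World
```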
By this point Microsoft had also supplied the BASIC for Perkin-Elmer's very expensive Model 3600 "intelligent terminal".
These were more than just terminals capable of limited client-side processing for time-sharing systems. As stand-alone desktop units they were powerful enough for scientists to acquire data, store it, process it, and/or control data-intensive instruments which a year earlier had required a large non-desktop host, often a forklift model.
The 3600 was the first desktop to actually resemble an early IBM PC: a horizontal unit with two prominent 5 1/4" floppies, a detachable keyboard (which also introduced the soft-key F-row) and a monitor.
Other than that, the box was simply a microprocessor and memory with two RS-232 ports (known as serial COM ports ever since their later appearance on IBM PCs) and an external instrument connection, not meant to be expandable or upgradable internally.
The ROM BIOS booted (without a floppy) into a novel disk-based operating system known as the Perkin-Elmer Terminal Operating System (PETOS), where a few of the DOS commands were available from ROM, but the majority of commands were expected to be present on a floppy residing at DISK0. DISK1 was usually employed for application programs and data storage.
These were proprietary Perkin-Elmer programs to interface with their own scientific instruments, but many users wanted to develop their own programs in BASIC, like you could on the competing Hewlett-Packard equipment using its built-in HP BASIC.
The team that had designed the 3600 had, of course, already been dispersed across numerous more rewarding projects.
Anyway, Perkin-Elmer got Microsoft to provide the BASIC for the 3600, and as we all gained deeper knowledge of PETOS by operating this equipment, it really helped later when the IBM PC was launched, because its DOS had such an uncanny similarity.
I should mention Gates wrote the Open Letter to Hobbyists. Of course, this was before "open source" or "free software" was even a term. At the time, CP/M had a "BDOS" supplied by DRI, with the OEM having to write the "BIOS". There were firms like Lifeboat for which writing the BIOS was part of the job.
Thinking about it, I wonder if it would have been feasible for Kildall to work on CP/M full-time at Intel and release CP/M source to public domain instead. It would be nice if CP/M was the first thing ported to 8086 back in 1978 for example.
Hmm. Maybe the bootcamps do give you a more realistic preparation than I had previously thought.
I think all of it is wonderful. Knowledge about software development is much more accessible than, e.g., architecture, and much more acknowledged without a degree.
But with the myriad of paths come more risks of choosing the wrong one, whether through honest mistakes or through being misled by dishonest people. And these mistakes can be very costly.
I have a hunch that sometime in the future almost all knowledge will be able to be learned like this. It is a good thing to study the advantages and disadvantages of this environment.
What struck me as odd was that the President/Founder of the bootcamp first hired me to do my job and review the videos of their past instructor. I did, and found that he was too aggressive in his approach to teaching his students: he talked too fast and went off on awful tangents that weren't in the curriculum. I told the founder, my boss at the time, what changes I could make to be less aggressive and to take a smoother, more presentable approach to the fundamentals of front-end development, and he agreed.
I found out the real reason for my services was not just to give students a new career perspective, but to let the boss read the startup idea proposals his students came up with, positioning himself as the bridge to their funding. Except he was the end of the bridge: an angel investor looking to own a huge chunk of their startup before preparing his elevator pitches to real angel investors. Another thing: he was also targeting veterans so their GI Bill money would be spent at his coding school.
I left after a month. I felt dirty and icky just looking back at it.
This is the part I don't quite understand. To me, the period when you search for jobs is exactly when you should be working as a freelancer or on side projects.
Working helps you get into the coding zone so that you are more prepared for coding sessions during interviews.
Working adds to your resume and gives you things to talk about when the interviewer asks, "What have you been doing recently?"
Working should also be the natural next step to do after completing a bootcamp to put all the knowledge into practice.
I can't think of why sending CVs, practicing algorithms and attending interviews would need a full-time commitment.
> Also, we noticed that at least four students happened to be married to programmers, and at least seven others had parents, siblings or other important people in their social circle who were programmers.
It makes learning Computer Science in university less valuable, since university graduates lack knowledge in specific programming languages/tech stacks. It's like saying, "You know all that stuff you learned about memory management, virtual memory, I/O subsystems, CPU schedulers and so on? Yeah that doesn't matter if you don't know Angular X.0"
Does anyone else see a problem with this?
The metrics they measure against are money- and job-based, which gives me the impression that either we don't know how to determine whether people know how to program, or no one really cares. We need some kind of standardized test or similar. I know of the Advanced Placement CS exam, but we need something more practical and not focused on a particular language.
I have taught a coding bootcamp 2 times now. The selection/recruitment of students -- not done by me -- was fairly loose in some cases. The only thing the bootcamp selection was strict on was that people completed their undergraduate degree (which is financially easier to do in Europe).
What I noticed is that there are people who will be successful. These people somehow have a strong background in logical thinking. It can come from doing predicate logic in philosophy courses, being data-oriented from biology studies, or even being hardcore at knitting.
Another thing I've noticed is that they learn quickly and don't ask a lot of questions. They will use a search engine and only come to you with the difficult questions. At the very least, they know quite well how they learn.
The final thing I've noticed is their determination. Some people come to a coding bootcamp with the expectation of "fix me, I need a job in this industry." The most successful students don't take anything for granted and know that they need to learn everything they can get their hands on.
What I noticed with students who aren't successful is that they don't think logically -- their attention/focus unfortunately does not allow it. They may or may not ask a lot of questions, and they will completely fail the bootcamp if they don't ask fellow students for help. Also interesting is that they did not learn the terminology or basics well enough.
In one of my groups, one of my best students was coaching one of the least successful students. It was intensive, and the explanations and questions were sharp. Yet, to my surprise, it didn't do much: at the end of the bootcamp the student was still one of the least successful. It makes me believe that not everyone can do a coding bootcamp, since the right mindset is required to start one. Teachers can only help you when you're open enough to receive the knowledge. I don't know to what extent that idea is true, but I want to find out.
__Unsuccessful Students Becoming Successful Students__
I have seen not-so-successful students become quite successful. They had an insane amount of grit and understood how they needed to learn the material. Compared to their unsuccessful counterparts (in the beginning, at least), these students were more structured and disciplined.
__When To Take A Bootcamp__
Even for not-so-successful students it can be a great tool, provided they keep self-teaching for 1 to 2 months afterwards. Unfortunately, though, it is not for everyone. If you don't think in a logical fashion or learn fast, then at the very least you need to make that up in determination -- or in time after the bootcamp. I feel a bootcamp is tough as nails for a student who only ever learned bullet points from presentation slides during their academic years. Programming is more akin to learning a musical instrument (i.e. it takes practice), and not everyone has experience learning that way.
(Following knitting instructions is very algorithmic -- it's a whole new world if you've never seen it. Here is an example of a pattern: http://vintagecraftsandmore.com/wp-content/uploads/2012/07/V...)
They would submit a lot of pull requests, but upon review it became very apparent that documentation was not being consulted, resulting in unnecessarily hacky solutions. The reason: programming by trial and error.
In the face of that, the only thing I could do was give them the benefit of the doubt by asking them to walk me through their problem-solving approach. This insulted them, because there was no problem-solving approach, just brute-forcing code with live reload until the feature worked.
Unfortunately, sometimes the feature did not actually work, or would not handle edge or error conditions which caused the program to be unstable. Sometimes to the point of causing a live incident.
Since bootcamps graduate people very frequently, and because of referral bonuses, they became a majority on our team. They used their majority to deny code reviews (not allowing people to mark their tasks as finished), and took turns pulling off microaggressions in round-robin fashion so that nobody was accountable enough to be retaliated against.
In the end, these people know they will not prevail through technical excellence but rather by pumping out as much code as possible and by playing dirty: refer a lot of friends, become a majority, avoid situations where a relative ranking can be established, and bully any opposition until they quit.
That's what you need as an indie developer. A lot of publicity and great art. It doesn't matter that you essentially reproduce the exact same recipe as any other Nintendo platformer, as long as you have great art and music, people will love your content.
Similar to Hollywood movies being rehashes of each other with different actors and a different setting. It's all about the human condition.
So yeah, in the rare case of Cuphead, they will do well. Maybe enough to make the next one, if they didn't overextend themselves.
For most small indies, though, I can tell you first-hand, it's tough. Especially since the whole Hello Games and No Man's Sky debacle, every 'gamer' out there is out to hate indies by default. We're scammers in sheep's clothing.
The resemblance to the cartoons of the era is uncanny. I really hope Disney doesn't sue them or something. Hopefully they're on rock-solid legal footing. In a just world all those copyrights would be ancient and long expired, but hey -- Mickey Mouse is still Disney's IP until 2023, and they'll probably figure out something to keep it going past that.
A positive outcome does not equal a wise choice.
The ability to fully understand the CSS box model, and to enlighten someone with it, was the pinnacle of your CSS-ninja tactics.
There were the purists who would write nothing but "Strict Mode" markup. Debates on why "transitional" should be just that -- transitional. There were moments of triumph when Tantek himself replied to your email, discussing his tan-hacks.
Then, there was my CSS mentor/superhero - Philippe Wittenbergh. I would email him right away to test my work on the Mac browsers, especially with that nasty Internet Explorer for Mac. This was before I was introduced to the Mac in 2006.
Soon enough, writing up a CSS grid system for a project became second nature and felt like ringing a bell. Hell, I even had a dead-simple CSS grid that got some traction, which I later added to GitHub; people liked it and used it for a while.
It is such a good feeling that "grid" is no longer just CSS terminology for a pattern used to build a layout, but now a keyword in CSS that the browser understands and acts on. We have come a long way.
1. I think the early 21st century -- 2001/2003-ish.
Perhaps a fallback like, oh I dunno, <TABLES>?!
In the past she wrote a CSS grid polyfill, but sadly it never got updated to the latest version of the spec. It'd be an easier sell if you could polyfill it reasonably on some older browsers, even if performance suffered a bit.
Right now the last holdout is Edge, which is funny because they were the first to ship the older spec, going back as far as IE10 or IE11. Luckily, the next version will be shipping the latest spec.
I've noticed autoprefixer supports a grid option. I'd love a guide that explained any incompatibilities between the two versions. Getting a mostly working grid on IE11 and Edge would probably make it an easier sell for many.
When flexbox started growing popular, there were guides showing you could support most features on IE10 and beyond with little effort if you used autoprefixer. Since there are equivalent versions of most properties in the older syntax, you just had to be aware of bugs and gotchas. For example, the default values for flex-grow, flex-shrink, and flex-basis (and their older equivalents) changed between versions, so you always had to set them explicitly. On top of the incompatibilities between versions, all implementations were buggy, so you also had to know the workarounds. Luckily, they're all well documented in the flexbugs repo.
Considering grid is more complicated than flexbox, I wouldn't be surprised to find it has a few bugs. Does anyone know if there's a gridbugs repo that developers can track?
EDIT: After a quick search, I found a blog post talking about supporting the older version.
Wow, you can say display: grid and display: flex in CSS? And then flex: 1 1 200px;, grid-row-end: span 2;, grid-column: 1 / -1;? Which browsers support these? Like, is it 90% or 50%?
It's not completely implausible that, 30 years from now, most of Europe and Asia are connected by hyperloops while the US has built nothing and Internet commentators are arguing that hyperloop is old news compared to yet-unproven teleportation technology, and anyways the population density of the US doesn't support hyperloops.
Doing the same through air travel would add at least 2-3 hours in total for the whole thing. I don't know about the cost comparison, but the user satisfaction is there.
In the Bay Area, the Caltrain commuter train runs from San Jose to San Francisco, through the downtown areas of most major cities in between. It is currently so popular that it is significantly over capacity every day. Yet it is still a constant political battle to get funding to improve the system, even though the Bay Area is one of the most educated and politically liberal parts of the country, where support for public transit is higher than many other places.
Sometimes I think we would have much more transit funding overall if we set aside part of the transit budget to send Americans to other countries on vacation, so they will return knowing how good public transit can actually be.
But last time we talked about this, it seemed to me that if you looked into the details, it was clear why we don't have HSR in the US. Even assuming we built a network that operated at Shanghai Maglev speeds, at the distances the network would need to operate, air travel would remain significantly more economical.
In a thread 3 years ago, I made a list of the top US cities by GDP, and then broke out the crow-flies ground distances between them:
Of the 55 edges on this graph, only 6 were 700 miles apart or less. Several of those are already served by the Acela.
There's a definite advantage to rail over air, in that rail can deposit you right in the middle of the city you're heading to. But that advantage can't make up for the fact that no train is going to compete with a plane for trips between the largest US cities.
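The city-pair distance tally described above is easy to reproduce. Here's a minimal sketch -- the coordinates are rough and the city list is illustrative, not the original top-GDP list from the comment, so the counts it prints apply only to this sample:

```python
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

# Approximate (lat, lon) for a few large US metros.
# Illustrative only -- not the original top-GDP list.
CITIES = {
    "New York": (40.71, -74.01),
    "Los Angeles": (34.05, -118.24),
    "Chicago": (41.88, -87.63),
    "Houston": (29.76, -95.37),
    "Washington DC": (38.91, -77.04),
    "San Francisco": (37.77, -122.42),
    "Boston": (42.36, -71.06),
    "Dallas": (32.78, -96.80),
}

def haversine_miles(a, b):
    """Great-circle ('crow flies') distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(h))  # 3958.8 = Earth's mean radius in miles

pairs = list(combinations(CITIES, 2))
short = [(x, y) for x, y in pairs if haversine_miles(CITIES[x], CITIES[y]) <= 700]
print(len(pairs), "edges,", len(short), "of them 700 miles or less")
```

With a full top-11 list you'd get the 55 edges the comment mentions (11 choose 2); the 700-mile cutoff is the rough threshold below which rail travel times start to compete with flying.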
My region is small and would be perfect for a light rail system mainly because it's got few people scattered over a wide area with no direct route.
Bombardier even makes rail cars for many countries so it's a home-grown resource we could use.
I think south eastern Canada and north eastern US could have a great interconnected rail system. I'm only 800 km (~500 miles) from Boston but I may as well be on the dark side of the moon.
Like New York City before its subway system: people were crowded into the city, but when rail expanded they could live in the suburbs and work in the city. I think a US and Canadian rail system would open up travel and trade on the eastern coasts of both countries. Day trips to cities you'd never even think of visiting now, or couldn't manage in a day at all.
My perception is that it's a huge money pit for something that's quite frankly inferior to our current infrastructure. We would be much better off improving our current interstate system.
That said, where it would make sense, like DC-Boston, we definitely should build it out. Build up the cities as the countryside is absorbed (as seen in Japan, and elsewhere in Asia) and let it become viable. Its deployment would definitely affect how cities and other communities grow and also depopulate, so we'd need to anticipate that and prepare for it.
Three things China has going for it vis-à-vis the US:
-Command economy (gov't can just move things through with little debate, displace 1MM people, if necessary.)
-Costs (in labor, materials, regulation, etc.)
Then you travel to the rest of the developed world, and wow, what a difference in infrastructure.
I mean, yes, the US is sparsely populated (in the middle), but isn't that also because it doesn't have a fast and easy transport system?
Wouldn't a high speed line between San Fran and Portland develop the very rural regions of Northern California?
A high-speed train also means high-speed cargo transport; wouldn't that drive some economic development?
These are not rhetorical questions; I seriously have no idea of the answers, but it would be nice to see what experts think about this.
A large number of people are suffering from changes outside of their control, and they are disconnected from those at the forefront of social and technological progress. Many of these people have lost trust in the system, and even in progress itself (outside of progressions that are accessible and affordable such as games or phones). As a result, there is little enthusiasm for investing in major improvements to systems or building any major infrastructure enabling progress.
"High speed rail? What is in it for me? I work part time and can't afford these medications. I want the life I used to have back."
Perhaps a good place to start is understanding the experience of people who are voting for an imagined retrograde society. This can be difficult for those of us who have had the privilege of a better education, or better opportunities in the cities, or even all of our needs met as we build what we build. The privileged must try, and must succeed in understanding what is happening here. This is because the votes of those within what is essentially a ghetto lead to major consequences, including underfunding high speed rail. The result isn't just ridiculously under-qualified and intellectually isolated politicians that are easy to make fun of.
The underprivileged will keep voting in this way until their concerns are answered (or not).
We at the technological forefront know more about what needs to be done in terms of advancing progress, possibly even to the point of solving half of all social problems. However, we must also pay heed to the immediate, harsh reality of the people left behind. Our environment -- natural, political, or infrastructural -- depends on this.
If the ethical demand to listen and react appropriately to the suffering of others does not convince us to strongly act, watching the destructive results of their votes should.
The TSA severely limits air travel's effectiveness, so it could be tempting to build a rail network just to bypass the TSA, but there's no reason to think the same screening procedures won't apply to HSR after the first incident (or even just a threat).
Melbourne to Sydney is worth doing now, though it's a close thing. But the benefits come as time savings for rich businessmen, and Australia told them that if they really wanted it they could pay for it themselves.
IMHO a high-speed rail network is not the start but an evolution of an existing regional rail/public transportation system that acts as feeder and commuter infrastructure.
The US lacks those public transportation systems even in mid-size towns. That's a bigger problem, IMHO.
I have the feeling that this advantage would be lost in the US, where far more people live in suburbs, so any trip starts with at least a 45-minute drive (not to mention that parking at an airport is probably more convenient than in a city center).
It's not insane. Federal money is never "free": it's taken from the people and always comes with strings attached.
Not building and maintaining highways and bridges shouldn't even be an option. While some upper-class, rich people can afford to live in our cities, they are a huge minority and most people rely on cars and highways. Outside of a couple of cities, good city public transportation simply doesn't exist in the US and won't exist anytime soon.
I think we need to be realistic as to what is possible in the US. High speed inter-city rail isn't possible. And even if it is, can it compete with the price of plane tickets? Doubtful. Giving our cities good public transportation isn't possible. It may have been possible in the past, but not the last few decades. Having room inside a city for all who want to live there most certainly isn't possible. Building roads and bridges has now become almost impossible in many places. I have to wonder what is the plan for the US transportation infrastructure. As far as I can see, the plan is to let it deteriorate until it doesn't exist anymore. At least in that sense, it's consistent with education, social programs, and the rest of our crumbling society.
Anyway, my flawed observation:
Some sort of deadlock where, instead of discussing how to improve the situation, the discourse gets stuck debating which is the correct reason for not doing anything, rather than trying to come up with improvements?
Arguments are often made that some nonexistent technology will make current investment pointless in the future, so no investment should be made now. Isn't that the argument implied by the title of the article?
Obviously, it could be true that future inventions will make it pointless, but that certainly is not something you can calculate or know off the cuff, if it's even worth speculating about. Building a high-speed rail network takes long enough that all those possible avenues can be explored in excruciating detail before the first shovel hits the dirt, a decade from now if everything moves quickly.
People are sceptical of hyperloop, which is understandable in many ways. But what if it worked? Wouldn't it be worth investing quite a lot of money simply to figure out whether it could?
Obviously it could potentially only solve a very specific part of the transportation puzzle, but one that could have quite some positive effects.
Positioning cars and aircraft as more or less the only viable modes of transport for the foreseeable future sounds like an awfully odd position to me, even for a very sparsely populated country. While the correct solution might not be high-speed rail, some variation of it could still be the best solution in several instances.
Maybe someone would come up with something like a tethered electric ground-effect aircraft/train that could take advantage of the sparse population, if they knew there was money to be made -- instead of massive resistance and cartloads of red tape.
Rail is convenient, but it will never ever be as convenient as having a car take you where you want to be, carry your kids, and carry your stuff around. As soon as self-driving technology eliminates the hassles of parking and clean solar-electric tech eliminates the environmental concerns, ridership is going to tank on all the fixed train lines. It might be 10-15 years out, but I would be shocked if any of the investments made today in rail ever pay off.
Roads are a much better, cheaper, faster, flexible option. We just need a 10x revolution in: storage density, fast charging, or efficiency.
America is smart!
The Rail Runner was built in 2006 primarily due to Governor Bill Richardson's efforts. It essentially covers two cities, Albuquerque (~500,000 people) and Santa Fe (the capital, ~70,000 people), which are already connected by Interstate 25.
I love trains; I recently took Amtrak to LA and back. And I love the Rail Runner. It's my favorite way to get to Santa Fe by far. Once I arrive, being without a car is not too bad: Santa Fe is a fairly walkable city, Albuquerque has a decent bus system, and a bicycle (which I can take on the train) makes things a lot easier.
The big problem with the Rail Runner is its cost.
Richardson was originally very vague about the cost, and initial estimates were (it turns out, wildly optimistic) sub-$100 million in initial capital. The state took out a loan to pay for construction. The total cost is now estimated to be about $800 million. Currently the state Department of Transportation pays about $25 million a year on the loan; as currently structured, that will slowly increase to $35 million per year until 2025 and 2026, when the payments jump to $110 million (per year!).
New Mexico is currently in a budget crisis (not just due to the Rail Runner). (http://fortune.com/2016/12/04/new-medico-budget-crisis/) There have been special legislative sessions called this year to sort things out, and there's conflict between all three branches of our state government. I have no idea where the DOT will find $80 million in their budget over the next ten years, at least not without serious cuts to our already underfunded highways.
Then there are the operating costs. These are not so bad -- not because revenues cover much (only about 10% of operating expenses), but because the rest is covered (at the moment) by county taxes, federal grants, and payments from Amtrak and BNSF for use of the track.
I'll be cynical: my personal belief is that Richardson intentionally hid the costs and pushed the Rail Runner as a short-term publicity stunt for his 2008 Presidential run, without a care as to what it would do to the state ten years later. It is very like him. (Don't even get me started on the spaceport.) https://www.abqjournal.com/news/state/602848nm10-16-07.htm
The legislature sponsored a study to determine the feasibility of selling the Rail Runner (https://lintvkrqe.files.wordpress.com/2015/11/final-hm-127-s...). It concluded that nobody would be willing to buy it due to low revenues, high operating costs, and the plethora of exclusivity agreements that would need to be renegotiated (with BNSF, Amtrak, the pueblos, the federal Department of Transportation). And selling it wouldn't help, since it wouldn't absolve us of the requirement to pay off the debt. At this point I don't foresee a solution other than refinancing the loan (again) to avoid those $100 million cliff payments, at the cost of further interest payments.
Like I said, I love the Rail Runner, and I really want to see it (or passenger rail of some sort) succeed in New Mexico. I do think the way the Rail Runner was handled -- intentionally hiding the costs and having no concrete plan to cover operating costs -- is completely unconscionable.
Not that it has to operate at a profit; after all, highways lose money too. But the Rail Runner loses so much money, and we're already a poor state. It is valuable to connect New Mexico's capital with its largest city. I just feel like there has to have been a better way to do it. I hope the proposed train from Las Cruces, NM to El Paso, TX (http://www.lcsun-news.com/story/news/local/2017/06/28/study-...) will learn from the mistakes made with the Rail Runner.
Whew. After all that, I'm curious: what successful rail projects have you seen, and what makes them successful?
So instead of "what's wrong with the US" we should ask "what went wrong with CA HSR?".
Some years ago, for a while I was a prof in Ohio. Well, there was a group all hot on connecting all the Rust Belt cities -- Chicago, Detroit, Cleveland, Columbus, Dayton, Cincinnati, Muncie, Akron, Indianapolis, South Bend, Youngstown, Toledo, etc. with passenger trains. They were really hot.
Look, guys, the US had a very good passenger rail network. You could go by train from one tiny crossroads to any other. And people did that. But soon that whole thing was killed off by -- may I have the envelope, please? -- right, the Model T, etc. Private cars. A lot of the tracks grew up with weeds.
Soon after WWII, for trips up to 1,000 miles, say with the whole family, people would rather just take the family car. Just after WWII the passenger trains were still running, but, no thanks, people would rather take the family car, e.g., from Florida all the way to Grandma's near Buffalo, NY. As soon as I got married, my wife and I went to her family farm for Christmas, 900 miles, by car, packed with stuff. Plane? Train? Bus? No thanks.
Gee, guys, now with the TSA, no way will I want to take a car full of luggage, toys, Christmas presents, etc. past the TSA. No way.
For me, for anything like family travel, public mass transportation, no matter how fast, how roomy, how cheap, how safe, due to the TSA and all the luggage handling problems, lack of privacy, being legally under the thumb of a lot of people, rules, bureaucrats, various cases of police, being subject to being forced to wait in my seat for four hours while whatever is going on, etc., the answer is no, no way, never, don't bother to ask again.
There are a lot of people and projects there in the woodwork eager to come out with lots of publicity, reasons, and excuses and eager to scarf up Federal subsidies. A LOT of people/projects. Clearly there is a whole industry of this stuff. They are always back in the woodwork, and as soon as they smell money, and they are good at smelling money, out they come, big publicity drives, etc.
(i) A single download that includes the article and supplementary information. Some journals are doing this now.
(ii) Articles in epub format - document reflow on my tablet!
(iii) A single download of an entire issue. Particularly desired for Nature Biotechnology.
Some kind of advanced article annotation system with an easy way to add citations + links to other sources would be great.
Also: the advertisements and commercials featured hyper-sexualized, semi-nude models who were often regular employees.
It seemed to be an innovative way to run such a business, but doing that sort of thing in LA is quite possibly the wrong model nowadays.
A year or two back, the LA Times ran a story about some LA clothing manufacturers leaving the city and relocating to El Paso, TX: http://www.latimes.com/business/la-me-korean-jobber-market-2...
There aren't really any ticketing line problems in Sweden. Anyone can buy their tickets online and have them delivered to their phones.
There's a proposal to remove small objects from low Earth orbit by shining a 10 kW or so earth-based laser at them for a few hours. This exerts enough light pressure to drop their orbit deeper into the atmosphere, where they slow further, re-enter, and burn up. That's probably the most cost-effective approach proposed so far. Now that solid-state lasers in the 50 kW range are available, this looks like a viable option.
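For scale, the light-pressure figure can be sanity-checked in a few lines. A minimal sketch, assuming perfect absorption and continuous illumination (both optimistic simplifications); the 100 g debris mass and 3-hour dwell time are made-up example inputs, not from the proposal:

```python
# Rough sanity check of laser light pressure on orbital debris.
# Assumes perfect absorption and continuous illumination -- both optimistic.
C = 299_792_458.0  # speed of light, m/s

def photon_force(power_watts: float) -> float:
    """Force from fully absorbed light: F = P / c (doubles if perfectly reflected)."""
    return power_watts / C

def delta_v(power_watts: float, mass_kg: float, seconds: float) -> float:
    """Velocity change from applying that force to a debris object of given mass."""
    return photon_force(power_watts) * seconds / mass_kg

force = photon_force(10e3)         # ~33 micronewtons from a 10 kW beam
dv = delta_v(10e3, 0.1, 3 * 3600)  # 100 g object, 3 hours of illumination
print(f"force = {force * 1e6:.1f} uN, delta-v = {dv:.2f} m/s")
```

Even this tiny force, applied for hours against a very light object, yields a delta-v of a few m/s -- enough to nudge a small object's perigee lower, which is the mechanism the proposal relies on.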
The actual implementation is here:
> Ups, du overskred CPU forbrugsgrænsen ("Oops, you exceeded the CPU usage limit")
It seems the link does have problems. Does anybody else experience the same thing?
I'm not sure how I could easily make an "image" copy of my stack/code; possibly this is where serverless is nice -- you just throw code/traffic at something.
Again, something I'll have to cross at some point.
And even if they were being used "correctly", software development is still poorer as an industry because of them. As they say, Stallman was right all along. https://www.gnu.org/philosophy/software-patents.en.html
Enfish v. Microsoft
> The Supreme Court has suggested that claims "purport[ing] to improve the functioning of the computer itself, or improv[ing] an existing technological process" might not succumb to the abstract idea exception.
1. There should be an absolute maximum on the litigation that can be brought against any organization for any reason in a given time period (say, a year), and that total cost should not be able to exceed some tiny fraction of its total operating costs for that period (parent companies included, to avoid hiding actual illegal activities in subsidiaries). In other words, it should be impossible for someone to kill a startup in the crib simply by creating overwhelming lawsuits that are too expensive in time and money to deal with.
2. There should be a very substantial penalty for failing to prevail after accusing someone of patent infringement; something like 10x the legal costs of the accused party, and a moratorium on any similar accusations against any party for some period (like 6 months). In other words, slow these trolls down and hit them hard when they fail, and they might not try to make a shady business out of it.
> People talk about healthcare, but in essence what we have right now is not healthcare. It's sick care. Some people see their physician when they're well, but most people don't, because there's not much advice that they can give you other than not to smoke and to exercise and all that.
I feel like I identify with this; I want to have conversations with my doctor all the time about routine advice seeking kind of stuff, but I don't because there's nothing wrong, plus it costs a lot. I would love to have a health care plan where conversations during and about being well were expected and included, but I don't see it coming any time soon.
Will this initiative double the amount of time during which we have the full capacity of our 20s and 30s? Or will it just give people a prolonged, half-century tour of the twilight years, from 70 to 120?
I remember reading that if you did enough strength training you'd essentially get the same cardiovascular benefits as from cardio, but I can't find the source.
> There currently is an upper limit, and the upper limit is probably around 115, 120. You have a very large number -- 100 billion people, to choose the number of people that have ever lived -- and you have only one who has made it through to 122, Jeanne Calment. The second oldest was 119. It does seem there is an upper limit. Some people have shown that in the last hundred years, even though we have progressively increased the average lifespan, the number of people who live above 115 has not increased.
I also couldn't help but think that his remarks about immortality being the naughty "I-word" is a roundabout way of addressing some of the excitement stirred by a certain slightly sensationalistic wizardly chap in the aging field.
I'm not entirely sure where the interviewer was going with this statement, though.
> There's a lot of Silicon Valley buzz about longevity, and many startups working to develop immortality pills.
I've yet to hear of a startup working on an "immortality pill."
 - https://www.buckinstitute.org/
"Selfish" vanity, fear, etc. including accumulation of wealth, power, etc.
A desire to remain with loved ones, including pets, and to keep loved ones around, including celebrities.
A safeguard against unexpected "unfair" death, e.g. getting murdered, assassinated or dying in a freak accident, or terrorist attack etc.
Carrying out long-term plans that take longer than human lifespans.
Participating in projects that take longer than human lifespans and may not be easily restocked with new humans: e.g. traveling interstellar distances in confined spaceships.
- The aforementioned "plans" and "projects" may simply mean the preservation and protection of certain things, ideas, cultures, rituals and records that cannot be automated or archived.
A desire to explore more of the universe than can be done in a mortal lifespan.
And the following are possible ways in which one or more of the above goals might be achieved:
- Cloning. You basically get a new person who may or may not be "as good" as the loved person/pet/celebrity you wanted to preserve.
- Repairing/rejuvenating one body for as long as you can.
- Separating the brain from the body and having it remotely control multiple "backup" bodies.
- Reincarnation. This of course assumes a "higher" plane of existence where our "true selves" actually live; e.g. this reality being a VR MMO that you can only play for as long as you've paid for.