Fast forward a few years - Experience, good programming habits, and the gratuitous use of assertions. Now I spend a negligible amount of time debugging, and it never ceases to amaze me how frequently things just work the first time.
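A minimal sketch of what that habit looks like in practice (the function and its rules here are hypothetical, just to illustrate): assertions on pre- and postconditions make a bug announce itself at the line where an invariant first breaks, instead of surfacing later as mysterious state.

```python
def transfer(balance, amount):
    # Preconditions: catch bad calls right at the boundary.
    assert amount > 0, "transfer amount must be positive"
    assert balance >= amount, "insufficient funds"
    new_balance = balance - amount
    # Postcondition: the invariant we promised to maintain.
    assert new_balance >= 0
    return new_balance

print(transfer(100, 30))  # 70
```

A failed assertion points straight at the broken assumption, which is most of what "things just work the first time" buys you.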
Edit: I guess what I want to say is that there is one, and only one, bottom line: the relentless, ruthless pursuit of quality. It takes time to develop the good habits and watch for the pitfalls, but once you're there you develop your software products in a quarter of the time, with one tenth the stress, and everyone on your team feels proud of themselves and each other. Then with your free time you can focus on what's really important: your business and your life.
1/3 planning, 1/6 coding, 1/4 component test and early system test, 1/4 system test with all components in hand. This differs from conventional scheduling in several important ways:
2. The half of the schedule devoted to debugging of completed code is much larger than normal.
3. The part that is easy to estimate, i.e., coding, is given only one-sixth of the schedule.
For me debugging usually means "figuring out why we had an outage." This means looking at: 1) application logs, 2) server metrics, and 3) source code associated with the failed applications.
I recently had to ssh into 8(!) servers and run ngrep on each to see how groups of messages were passed around, then look at timestamps to correlate what happened. It was very tedious. This could have been avoided by better debug logging; we could have switched that on for 2 minutes, run the tests, and then looked at everything in Logstash.
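A sketch of the switch-on-demand idea, with hypothetical logger and message names: flow tracing lives at DEBUG level, which stays silent in normal operation and can be flipped on for a couple of minutes when you need to correlate behavior across servers.

```python
import logging

log = logging.getLogger("msgflow")

def handle_message(msg_id, source, dest):
    # Trace every hop at DEBUG level; free when DEBUG is off.
    log.debug("msg %s: %s -> %s", msg_id, source, dest)
    # ... actual handling would go here ...
    return True

logging.basicConfig(level=logging.INFO)  # normal operation: DEBUG silent
handle_message(1, "svc-a", "svc-b")      # nothing logged
log.setLevel(logging.DEBUG)              # flip on for a test run
handle_message(2, "svc-a", "svc-b")      # now traced, with timestamps
```

With the traces shipped to something like Logstash, the cross-server correlation becomes a query instead of an ngrep session per box.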
When this happens, I end up spending a ton of time tracking down errors. On a bad week, this can be half my time.
So to me, debugging is as much thinking about how you'll have to solve errors in the future and planning for it as it is writing unit tests and tweaking code.
I've sunk a lot of time into trying to change this. Among other changes, I've:
- Improved crash dump collection, to spend less time reproducing bugs and be more thorough in addressing them.
- Improved code debuggability - for example, writing scripts to inject call-stack information into ActionScript and Java via disassembly, which I can then display on assert, especially on platforms where I have unreliable or incomplete debuggers.
- Learned and used defensive coding techniques to make bugs fewer in number, shallower, and caught more quickly and with more context.
- Wrote thorough tests to catch said bugs before I even run my main application, and fed in more edge cases.
- Learned more tools to catch bugs I might not even know exist - valgrind, address sanitizer, static analyzers, fuzz testers, ...
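A toy illustration of the defensive style in the list above (the function is hypothetical): validate inputs at the boundary and fail fast with context, so a bad value is caught where it enters rather than three layers deeper.

```python
def parse_port(value):
    # Fail fast, and say exactly what was wrong and what we got.
    if not isinstance(value, str):
        raise TypeError(f"expected str, got {type(value).__name__}")
    if not value.isdigit():
        raise ValueError(f"port must be numeric, got {value!r}")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("8080"))  # 8080
```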
I spend much less time debugging my own code now. If I'm lucky, I'll work on projects where I don't have to debug my coworker's code all that much either. That still leaves debugging 3rd party libraries and tools - which I may lack the source to entirely - that I suddenly have more free time to really properly investigate and get to the root cause of.
I am working on an existing distributed system with many moving pieces, which is rather prone to outages. This is fintech, so outages mean a lot of money for a lot of people. So my job involves overhauling the existing system, as I upgrade bits of the system slowly: A full rewrite at once would be madness, but I suspect nothing in the current system will remain in two years.
The biggest time sinks are stress testing any of the newer pieces that I try to bring in, followed by incident remediation. There's an incident that requires a human hand to fix it every couple of weeks or so, and I end up spending about three days each time writing better error handling code, adding observability and alerts, and if something is really recurring, writing automation to make the problem fix itself.
This is a fact of life in any distributed system that was written fast: people start out happy because it works most of the time, but as you chase the 4th and 5th nine, you need people hardening the system. This is something that is very hard to do as you build, anyway: while unit tests are good, there are entire layers of behavior nobody will be able to spec properly by looking at one piece at a time, so stress testing, gamedays, and the like are the only ways to make sure not just that the system works to a spec, but that we can even come up with a spec that behaves the way we want in practice.
There's value in evaluating scenarios in your head, but I've also seen what happens when mathematicians use that as their only weapon in a distributed system: Months are spent making sure the system is correct, but then lots of effort is spent on scenarios that are more theoretical than practical, and other scenarios are ignored, even though they occur a lot in practice.
In this respect, it's not very different from entrepreneurship: Getting an MVP out the door and doing things by hand instead of using automation is going to beat spending a lot of time making a product without having any idea of what the market really wants.
When I started, the bulk of my time was debugging my own code. I am gifted with the ability to write vast swathes of code in a short amount of time and when I was younger, it was vast swathes of shit code.
A little later into my career (years 3.5-5), I spent more time coding and less time debugging. I designed my code better, used better patterns, and generally was just an all-around better engineer.
I've come into the third stage of my career now where I spend a good portion of my time debugging junior engineers' code in a complex system I work on. In particular, my focus is usually in reliability and performance. I don't tend to debug the junior engineer one-off issues but rather the subtle regressions introduced by seemingly harmless changes.
In this third stage, I still write a lot of code, but much more of it is investigative and refinement over existing ideas with occasional injection of something wholly new.
So, they're all bugs, and in a sense, all coding is debugging. New-feature, regression, existing (from previous release), and escalation bugs. They're all basically the same thing: Identify the deficit, write a fix (includes what you might have meant by "debugging"), write tests to cover the changed behavior, check it in, deploy/release.
Bonus points if the project started as one thing and pivoted to something completely different midway through development, and about 50% of the code is completely unused.
                    Analysis  Programming  Debugging  Overhead
    My Own Stuff       30%        60%         10%        0%
    Others' Code       50%        10%         30%       10%
    Enterprise         10%        10%         10%       70%
Honestly, I'm probably better at it than building new features anyway.
Maintaining - if this means soft feature creep, then 10%.
Maintaining - if this means bug fixes and other things, put it in implementation: 5%.
New functionality: 20%, including sitting with users for new-functionality requests and seeing their workflow. I genuinely like this.
Other stuff. Like filling in timesheets, which assume hours can accurately be attributed to discrete tasks for discrete people any and all of the time.*
* Just set goals for staff. Do staff achieve their goals? If so, why timesheet? Or just timesheet roughly; my hour-by-hour, 7-day-per-week sheet is a pain.
- One project is in active development, and I probably spend about 70-80% of keyboard time coding with 20% debugging.
- A separate project is in maintenance mode, and obviously most of my time on it is debugging as bugs come in. So probably opposite, 70-80% debugging there.
- Sometimes feature extension requests come in, in which case it's probably closer to 50/50 on that project.
But I have the luxury to work in a result-oriented environment with people too experienced to fall for "agile". So I can spend half of my week in a cafe with pen & paper as long as the project is done by Friday night.
There are certainly practices that reduce the amount of debugging, but it's all relative. Personally, the question for me is nearer something like:
> When is the right time to let go of my current approach?
But I am not sure how I can improve myself. I am not sure whether anyone else has faced this, but I feel like beginners really feel the same way.
All the fun lies in debugging; I sometimes love it. I find funny bugs in my code. But it is so time-consuming and I really want to reduce that. Not sure how. Any help would be appreciated.
60% debugging
15% stack overflow
25% email archives
15% commit logs
10% navigating code, spelunking
12% jira
5% writing tests to confirm config/state/feature availability
3% coding
Usually I can get through a chunk of code with no problem, but when that one inevitable bug arises, it takes up a lot of time in trial and error, Stack Overflow, and just generally googling for a solution.
That said, I'd say only about one-third of my time is spent on code. I spend significantly more of my time doing operations work and having meetings.
Most things are several small (<10 line) functions.
The most complex thing I've built is a pre-qualification form. Thank god for moment.js, because I never thought it would be so difficult to calculate if someone is between the ages of 40 and 82 (or will be 40 by October 15th of next year).
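The original used moment.js; a hedged Python re-creation of the same check (my reading of the rule, with illustrative names and dates): is the person between 40 and 82 today, or will they have turned 40 by October 15th of next year?

```python
from datetime import date

def qualifies(birth, today):
    def age_on(d):
        # Subtract one if the birthday hasn't happened yet that year.
        return d.year - birth.year - ((d.month, d.day) < (birth.month, birth.day))
    cutoff = date(today.year + 1, 10, 15)
    return 40 <= age_on(today) <= 82 or age_on(cutoff) >= 40

print(qualifies(date(1978, 11, 1), date(2018, 6, 1)))  # True: 39 now, 40 before the cutoff
```

The subtlety (and presumably where moment.js earned its keep) is the birthday-not-yet-reached correction, which is easy to get wrong with naive year subtraction.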
That said, about the question asked. I can attest to the fact that in general thinking and writing good tests greatly reduces time spent debugging. Also, adding judicious and useful logging for the critical / edge cases helps a whole lot.
That, of course, is not to say that you can't get something out in two or three months that is functional, if rough around the edges (at least in terms of the code).
If you're really worried about the possibility of not being able to deliver, I would advise you to pay close attention to the terms of the contract if they offer you the gig: how much freedom do you have on the project, and are you accountable for its success (particularly in a financial sense)?
You can recommend that the company bring on another developer, perhaps before you sign stuff. Treat it less as a negotiation with an employer and more like a consultation with a client: you want the app to be successful, and another developer would help to ensure that.
If the company has made a promise like this to the customer without even having a development team in place, you can be guaranteed this won't be the only dishonest piece of the puzzle.
Bail and don't look back.
Just be honest about the whole project. Let them know what they are getting into and set a plan from the start. If they decide to keep adding on requirements or changing things around, let them know how much time it will add onto the project.
You may never get the project done and it may be really complicated, but it will be great experience. I have many projects like these that I worked on after college.
Phone libraries won't magically solve any of your issues.
2. I would attempt to drastically reduce the scope. Emphasize the worst-case development time to your client. The client's reaction to this is often very informative. My best clients react by accepting my concerns and scaling back the requirements to the bare minimum. My worst clients have refused to acknowledge serious issues and insisted on charging ahead without changes to the plan.
3. You avoid being taken advantage of by having a better offer. Since you can't do this immediately, accept that you will be taken advantage of until you can clearly signal your value. Put effort into personal projects and other visible demonstrations.
Questions to ask yourself: What are your personal career goals? How would taking this job serve them?
I never really intended for anyone to use it seriously. I made it for myself, but went ahead and posted it online anyway. It is a 1500-line single-file, nearly-commentless, nearly-spaceless abomination of code with no documentation, and an endless list of critical bugs that every user keeps encountering. They have elaborate workarounds for many of these bugs.
It became a self-reinforcing feedback loop: "Why do we use xkas? Because everything else is written in xkas", and so even more code was created and written in xkas. And so even though I've since written a proper assembler that's dozens of times nicer, no one can or will use it.
Lately, people have been writing their own versions (in addition to countless forks) that try to offer backward compatibility with all the crazy parsing errors and (mis)features of xkas, like left-to-right evaluation of math expressions, and the most convoluted macro evaluation system you've ever seen (one user proved it was Turing complete and wrote a Brainfuck parser in it.)
It's surreal. I feel terrible that so many people are stuck with this mess, but even I can't stop it anymore :/
I wrote it around 1993 to scratch my own itch, adapting it from a little script called Go by David J. Musliner. At the time I was using a literate programming tool that generated LaTeX from C code. When I moved on from that job a couple of years later, I released a final version two, which documented that I was no longer planning to maintain it. Since then John Collins has taken up the cause and been doing a wonderful job, from what I can tell from occasional ego searches through Google and StackExchange.
Two amusing things: 1) it is the only piece of Perl code I have written, ever, and 2) I had reason to use LaTeX a few years later, and decided I really dislike the sheer complexity of it all. Despite that, it is surely the most enduring and widely used piece of code that I can claim credit for having a hand in.
What have I done?
Amusingly, a TODO I put is still there:
    /* TODO: get rid of this by adding fixed-point support to SampleFormat.
     * For now, we allocate temporary float buffers to convert the fixed
     * point samples into something we can feed to the WaveTrack. Allocating
     * big blocks of data like this isn't a great idea, but it's temporary.
     */
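For context, a sketch of the conversion that temporary buffer performs, assuming 16-bit signed fixed-point samples (the actual code in question differs; this is just the shape of the operation):

```python
def int16_to_float(samples):
    # Map fixed-point [-32768, 32767] to floats in roughly [-1.0, 1.0).
    return [s / 32768.0 for s in samples]

print(int16_to_float([0, 16384, -32768]))  # [0.0, 0.5, -1.0]
```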
The oldest is the internal system for a record label, written in 1997, and I still occasionally get emails asking how such and such works (and I have little idea; it was in Perl).
Through to code that processes video and audio snippets for most of the UK Football League premium content sites. Authored in 2000 (mostly VBScripts that slice, encode, and distribute media files and the metadata).
Parts of btinternet.com still appear to use my horrible CMS... written in 1999... though gladly it's now very few parts and I suspect these are just cached outputs rather than the CMS still being in production.
Most worrying would be the UK Home Office, most UK banks, and some heavy manufacturing companies that I wrote project management reporting software for over a decade ago; as they manage 20-year projects, I believe that stuff has at least another decade in production. At least none of these systems are internet-connected (then again, they'll never be updated either).
My code isn't terribly clever or pretty; those requirements got dropped a long time ago. But I have learned to make code that is simple to read, easy to maintain and tweak, and that can sprintf-debug with the best of them (debug tools of choice have come and gone in the time my code has been live).
Mean-looking image of a parked-up one - the red bag covers part of the system: http://www.vaq136.com/misawa/cobra73418-017b.jpg
The reason I'm happy to say this publicly is because
a) it took years to write, full time. Data recovery is a hell of a lot more complex than I'd ever imagined.
b) data recovery is no longer possible on SSDs (if they have TRIM enabled, as they do in all major OSes), so it's a declining market. Anyone would be nuts to be trying to enter it at this point.
Also between 1999 and 2001, I was involved in the LiViD project where I worked on a port to PowerPC. I don't recall any patches actually landing into LiViD because it turned out that the bug was in GCC itself, a bug I was told must have existed since the mid-eighties. I didn't directly write the GCC patch, but did debug the compiler error and worked with the GCC team on the fix. This directly resulted in a port of Xine to PowerPC. (LiViD and Xine are early projects for multimedia and DVD playback on Linux) Xine exists today, but it's unlikely any of my code is in it. While the GCC fix is not my code, the fix itself still endures and exists because of my interaction with the project.
It's a newer example, but code I wrote simply as a demo for the Cairo graphics project back in 2004 became integrated into Tuxpaint and is still used today for rasterizing SVG graphics into stamps.
Nevertheless, my own examples are all non-games. In the early nineties I wrote a program in Turbo Pascal to manage grades and print report cards, I heard one elementary school in my home town was using this until 2012.
On one of my first web programming jobs, I made a ColdFusion-like template interpreter and ecommerce engine in C; that was about 1998. One completely online-based company that we launched with this software kept on running on it until it was sold two years ago.
In the early 2000s, my startup produced a web content management system that I wrote most of the code for, and sometimes I still get usage questions about it even today (to be honest, knowing that code is still used in production is not a good feeling).
Oh, I just remembered: around that time I wrote some microcontroller code that went into a certain brand of PA systems for TV studios, I'm pretty sure that's still in use...
My oldest personal project is a chat place where RPG players can meet up and roll dice, I think that launched around 2003 in some incarnation and though it's getting updates and extensions from time to time, the core code is almost unchanged: https://rolz.org
People still have modems???
I built a FileMaker database for a school system.
They have tried to retire it twice. Both projects failed miserably.
For the last 10 years it has been maintained by the same office admin. She still calls me every now and again to ask questions.
This was after my first year of college and I really knew nothing about algorithms. I wrote a horrific program in BASIC on a PC that did what I now know is a greedy bin-packing solution. It computed a least-squares metric and tried moving and swapping orders until nothing improved the metric.
I was shocked to hear that 20 years later they were still using that same program.
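A hedged sketch of the heuristic as described, with illustrative numbers (the original was BASIC and surely differed in detail): greedy assignment first, scored by a least-squares imbalance metric, then swap pairs between bins until no swap improves the score.

```python
def score(bins):
    # Least-squares metric: sum of squared deviations from the mean load.
    mean = sum(sum(b) for b in bins) / len(bins)
    return sum((sum(b) - mean) ** 2 for b in bins)

def improve_once(bins):
    # Try every cross-bin swap; keep the first one that lowers the score.
    for a in bins:
        for b in bins:
            if a is b:
                continue
            for i in range(len(a)):
                for j in range(len(b)):
                    before = score(bins)
                    a[i], b[j] = b[j], a[i]
                    if score(bins) < before:
                        return True
                    a[i], b[j] = b[j], a[i]  # undo
    return False

def pack(orders, nbins):
    bins = [[] for _ in range(nbins)]
    for o in sorted(orders, reverse=True):   # greedy: biggest order first,
        min(bins, key=sum).append(o)         # into the lightest bin
    while improve_once(bins):                # local search until no swap helps
        pass
    return bins

bins = pack([8, 7, 6, 5, 4, 3, 2, 1], 3)
print(sorted(sum(b) for b in bins))  # loads close to 12 each
```

Crude, but exactly the sort of thing that quietly runs for 20 years.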
No, not a BBS door, but an actual door, written in Second Life's LSL (Linden Scripting Language). You'd think making a door should be pretty simple, but as the engine had no 3D primitive (prim) that had an axis on one of its edges, doors were often pretty awkward workarounds, either involving linking the door to a cylinder or worse, rotating and then moving when the door was opened or closed.
This script when dropped into a basic cube prim shapes it into a door, applies a texture and most importantly, cuts the prim in half so that the Z-axis ends up on the side and it can rotate around and act like a door in only one prim (the prim allowance was limited, so this mattered).
The script also has several workarounds for engine funkyness, including one where it automatically moves back into position after every cycle to counteract "drift" - otherwise, due to accumulated floating point error, the doors would slowly drift out of position when opened and closed many times.
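The drift problem and its fix can be sketched in a few lines (a toy 2D model with made-up coordinates, not LSL): repeated rotate-open/rotate-close cycles accumulate floating-point error, so the workaround is to snap back to a stored home position after each cycle.

```python
import math

HOME = (10.0, 5.0)

def rotate(pos, angle, pivot):
    # Rotate pos around pivot by angle radians.
    x, y = pos[0] - pivot[0], pos[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + x * c - y * s, pivot[1] + x * s + y * c)

pos = HOME
for _ in range(10000):                     # many open/close cycles
    pos = rotate(pos, math.pi / 2, (9.0, 5.0))
    pos = rotate(pos, -math.pi / 2, (9.0, 5.0))
drift = math.dist(pos, HOME)               # accumulated error (tiny here)
pos = HOME                                 # the workaround: snap back home
```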
I know it is still in use, because Second Life still forwards messages to my account to email, so occasionally I get gems such as this:
> [16:04] distresseddamsel: hi there, i just purchased your wooden slave kennels and i can't get into it. I tried to follow your istructions on how to change the group, but when i edit the door, the option for group is greyed out.
Apparently my doors have been used in all sorts of items...
1e7 copies * 10 hours * 60 minutes * 60 seconds * 30fps * 1280 * 720 ~~ 10 quintillion
Pretty much everything professional is gone... hell, the only employer of mine that is still in business is Google. When I left them in 2014, about 3% of the code I'd written for them was still in production, and following the rule above, it's silly stuff that nobody ever sees, like https://www.google.com/search?q=deubogpiegpj&tbs=qdr:h (that's the no-results page when a tool is selected).
It was also ported to the DotA 2 engine, where it has millions (!) of subscribers.
I believe that both variations are still running, the CF version on Metafilter, and the PHP version was used in B2, which became WordPress. A few years ago, I was asked by the WP team to relicense the code from GPLv2 to GPLv2+ and sure enough, it lives on with relatively few modifications in WordPress: https://github.com/WordPress/WordPress/blob/b1804afeaf07eb97...
Perhaps most notable is that I wrote it without having taken a compilers class or having much (any) understanding of stack-based parsing, but it still lives on, so I guess it was good enough to get the job done.
It was quite amusing to get invoices from my daughter's (born in 1996) school produced by this program on a format I had laid out.
There's still, AFAIK, an easter egg that displays the name of the people that worked on it back in the day, one just needs to open the right form and double-click in the right place.....
The suits also made a decision to open a third call center based on some staffing and cost analysis done by me and my boss. We used two totally different models (and argued a lot about which model was right). The director said, "The conclusions you guys came to were within 1% of each other, so it must be right. We're going to move ahead with the new center." That center is still up and running.
The thing that gave me chills was that no one told me the analysis would be used to make a $25M decision. I thought it was just skylarking and hacked it out with some formulas in a 3-page Excel spreadsheet in about three hours.
I was a noob, and the code is horrible; I even used floating point (doubles) for money :/ so there's an accuracy bug that happens for big enough numbers. They apparently still use that program (at least a few years ago they did), and they divide the numbers before they become big enough as a workaround for the bug (but then the reports have to be fixed by hand).
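The bug class is easy to demonstrate: binary doubles can't represent most decimal amounts exactly, and the error compounds as values accumulate, whereas a decimal type keeps cents exact. A minimal illustration:

```python
from decimal import Decimal

# Accumulate a thousand 10-cent charges both ways.
total_f = sum(0.10 for _ in range(1000))           # binary doubles
total_d = sum(Decimal("0.10") for _ in range(1000))  # exact decimal

print(total_f)   # not exactly 100.0
print(total_d)   # exactly 100.00
```

This is why the standard advice is integers-of-cents or a decimal type for money, never floats.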
"Compiler? We used to dream of having a compiler."
"I did me coding on punched cards w' a big rubber band around 'em."
"Punched cards? You don't know y'er born; all we 'ad when I were a lad were an abacus."
"And it were a 5-channel Baudot abacus an' all."
P.S. I was a 10-year-old boy. (:
Sadly, the educational applications for Apple IIs I wrote in the mid 80's are no longer in use.
It was called the SharePoint PowerPak; it added a few features that "inspired" built-in functionality in later versions. It was primarily written by hacking apart the insides of STS and sending a few emails to some MS folks to get clarification on some undocumented thing that I was interfacing with. Color-coded calendars, categorized content in document libraries, tasks with an "email assignee" workflow, a Windows XP theme for STS, and a few other features. I was actually really proud of that one!
I then worked with another dev on a search add-in for STS called SharePoint PowerSearch. It was clever in that it used the little-known "_search" target for <a> elements that, basically, opened the link into a sidebar frame that IE had back in the day. The mom-and-pop business had that installed too.
Those tools, along with a community site I ran, blossomed into the SharePoint training and consulting company I own today.
So that's two 14-year old products still in use at a small business in the Philadelphia area. Made me smile to hear about it.
Anyone have servers running 24x7 with hard drives older than that?
Nobody uses it now apart from me, but I still maintain it.
http://dzigi.itgo.com
http://dzigi.itgo.com/o_autoru.htm <-- about-me page with my pic as a kid haha
Had another call from someone last year saying "x is broke" and... piecing stuff together, I realized they were still using something built in 2002, 13 years later (now 14 years). Trawling through old code was weird - a mix of pride - much of it was pretty readable and understandable - and regret - many 'cut corners' I wish I'd not cut now, as they made the fixes take longer.
That said, I think there's something very useful about having to deal with your own code 5-10 years after the fact. You'll have a greater appreciation for why 'the right way' is what it is, and ime, I've found that code tends to be more maintainable and understandable by others when it's been written by someone who's had to maintain old code themselves (usually their own).
Doesn't mean newer/younger folks can't write good maintainable code, but it's a skill that seems to come with age more than anything else.
I released the first content management system for websites back in about 1995. It was a fully database driven system with different post types, much like WordPress in design, and editable page content.
About 15 years ago, I wrote an entire web-based system for managing an online grocery delivery company. The whole site runs off of version 2 of that content management system, which had lots of the version 1 code in it, which is now 21 years old.
And yes, they are still running their whole business off of that unmodified system today! I should have put them onto a monthly maintenance plan. ;-)
Sadly, I hear that they will be finally replacing it soon, putting an end to that legacy code.
Screen shots of the tube sheet mapping application are in that PDF. The system was eventually featured in Visual Basic Programmer's Journal in September of 1999. I was 24 at the time and I felt like I had really accomplished something!
The students would forget to tell the student organisation when they moved or got a new phone number, but they'd always remembered to inform the university.
Importing data meant getting an email containing a CSV file, parsing the data, looking up the students in our own database, and updating their information. Because I'd just set up the qmail mail server and was learning about .qmail files and mail queues in general, I just hooked the script into the .qmail file. I figured it would save me the trouble of dealing with the IMAP server.
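For anyone unfamiliar with the mechanism: a dot-qmail file is just a list of delivery instructions, and a line starting with `|` pipes each incoming message through a program. So the hookup was roughly this (the script path here is hypothetical):

```
# .qmail for the import address; qmail ignores lines starting with #
|/home/admin/bin/import-students
```

Every mail delivered to that address gets fed to the script on stdin, which is all the "integration" the import needed.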
I think it was six years later, maybe a little more, when I got a call from someone at the student organisation asking if I could explain how the student address information was kept up to date. They knew that the data came from the university administration, but it just sort of disappeared into the belly of the mail server.
 https://github.com/slburson/misc-extensions -- specifically 'gmap' and 'new-let'
I'm still amazed at how lousy http client libraries are in so many environments.
One bit that I hope is gone is the ONC RPC stuff I wrote for talking to the credit card processing engine we were using back then. That was pretty ugly. It was my first programming job, I'd never done RPC before, and I hacked up something pretty kludgy to make it all work. Not my proudest moment. :-(
Amazed and impressed that clever people are using and maintaining this at https://github.com/massemanet/distel.
(I had hoped the answer would be the Java arcade game I wrote in ~1997, Araknoid, but that seems to have disappeared from the internets.)
I also don't count it as code, but the Pinball Expo 1994 website I put together before that (in Notepad, natch) is apparently still up and running at Linköping University in Sweden:
I still have the conversation (from 2000) with Daniel Robbins, founder of the Gentoo Linux distribution, where we discussed the name collision. There was no issue, we just both agreed to peacefully co-exist. :)
Very cool, since I learned that he, like me, came from an Amiga background. Although he out-cooled me by a laaarge margin, having worked at semi-legendary game company Psygnosis.
The first version was released in 2001.
15 years later I still work on it and release new versions. Because people still use FTP.
I released it to a private group of people; it has mostly been kept a secret within that group and has since been extended to support many different services and been integrated into complex automated systems which automatically rip, package, and release new music as it comes out, and also old music as it is added via the request feature of the site. I released a much better pure-bash version when I was 15-ish, I think, but many people still use the abomination of a userscript which writes a bash script, as far as I am aware.
And apparently they didn't delete the shit because (a) they're lazy and (b) people just skip over it assuming it's a browser error instead of typing "INDEX" in their address bar. So, they left it in there as a successful example in a demo set. I got a glowing recommendation, too, since I showed the original ("bleeding-edge, web tech!") to the person in charge who also thought the demo glitched. Funny all around lol.
Note: This was a bullshit project rather than paid or important work. Not what I'd do if someone paid me for quality work haha.
I moved on to a company in the financial sector and my employer acquired the rights to use this software (a decision I had nothing to do with). We ended-up using it as a way to embed customer-specific customisation scripts into a cloud-based product - something that it had never been designed to do, but which ultimately worked quite well.
A couple of weeks ago I finally replaced it with a much smaller and more focussed body of C# code, 16+ years after it was initially developed.
The second one dates back to 2008. It's a webapp written in Django for real estate agents, full of bugs and requiring careful input of data in order not to mess up the listing. I went on a 3-day hackathon somewhere back in 2011 to fix all the bugs for free and got some very angry calls, because things weren't working like they used to before. Turns out people were aware of these bugs and had made their own workarounds. I was forced to roll back all my changes and backport just a single change - the ability to reorder images by using drag and drop.
It was the first thing I wrote and I'm still insanely proud of it.
It's chugging along sending 5-6 reminder emails every day and I use it once a month or so.
One German engineer later noticed that it contained my copyright, located my address and subsidized me every time the system was commercially sold up until 2008. Not sure if it is still in use, but I am thankful for his sense of justice.
The oldest code that I distributed - it was written by a friend of mine - is Whois Web Professional which was a whois client written in VB6. I know it's still in use because I have to maintain the version file on my website and even keep my server running Apache because its update checking was written by a 12 year old in WinSock and doesn't handle the protocol very well. If I change anything, the VB6 program crashes on startup and I get emails about it. You can see it at https://rietta.com/whoisweb/.
I wrote most of the code for that (in MacroMind Director) in 1995 (I remember that because we needed to change the way we did a bunch of audio when Windows 95 came out in about September that year - in Win3.3 we could do multi-channel audio, but in Win95 we only had the two stereo channels, so we had to mix down a bunch of multi-channel sound - we came so close to blowing the size of a CD-ROM after that!)
It's a couple thousand lines, and nobody wants to touch it with a ten foot pole, but it also has no known bugs, and will live for the remaining lifetime of the product.
Hell, I'm quite sure that some of the first software I wrote is still used in SAP despite me never having worked for them.
So I wrote a disgusting (yet fluent!) enum-based monster called JILT (Job Information Language Template), that required that every existing .jil file be rewritten as a statically-typed java enum which could then be used to spit out the entire tree of validated .jil files. The nice thing was that your job dependencies were now enforced by the compiler and could be included in CI. But the actual inner workings were a hot mess. And of course, now the support teams have two systems to maintain and ensure they remain in sync.
This was at Goldman Sachs and I hear it's still under active development by the support teams...
Platform: Lotus Notes / Domino.
It is an algorithm to produce non-predictable numeric codes of variable length. A much improved version was used for marketing campaigns.
The algorithm uses a Feistel network operating over an arbitrary block size. It is basically a symmetric encryption function of variable length.
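I haven't seen the original code, so this is only a hypothetical sketch of the construction described: a balanced Feistel network over an even number of decimal digits, using modular addition instead of XOR so every intermediate value stays numeric, with SHA-256 standing in for the round function.

```python
import hashlib

def _round(value: int, key: bytes, rnd: int, half_mod: int) -> int:
    # Pseudorandom round function: hash the half-block with the key and round number.
    data = key + bytes([rnd]) + value.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big") % half_mod

def encode(n: int, key: bytes, digits: int = 8, rounds: int = 4) -> int:
    # Balanced Feistel over `digits` decimal digits (assumed even here);
    # modular addition keeps the output in the same numeric range as the input.
    half_mod = 10 ** (digits // 2)
    left, right = divmod(n, half_mod)
    for rnd in range(rounds):
        left, right = right, (left + _round(right, key, rnd, half_mod)) % half_mod
    return left * half_mod + right

def decode(c: int, key: bytes, digits: int = 8, rounds: int = 4) -> int:
    # Run the rounds backwards, subtracting where encode added.
    half_mod = 10 ** (digits // 2)
    left, right = divmod(c, half_mod)
    for rnd in reversed(range(rounds)):
        left, right = (right - _round(left, key, rnd, half_mod)) % half_mod, left
    return left * half_mod + right
```

Because each round is trivially invertible, decode recovers the original number exactly, which is what makes the codes usable as unpredictable but verifiable campaign identifiers.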
I do know there are plugins I wrote for ACDSee as a coop student in 2000. I can still see some of the icons I drew.
Surprised to see it's still being downloaded actively, supposedly by Linux distributions that include it still for some reason.
Was tired of Aumix back then being buggy, so I wrote my own replacement. I think my biggest wtf moment was this being included in the FreeBSD project (at least 4.0), and many Linux distributions also, so it's now out there forever.
At the time, JASS (Blizzard's scripting syntax) was largely procedural, so this was some brute-force work that was a real bear to make usable across different mods.
Still playable, though, and still enjoyable.
"I suspect that the oldest untouched code (of mine) in the interpreter is the code for balancing a symbol table. This is going to be early 1982 and is untouched since it was written. It implements a technique from Colin Day and published in the Computer Journal in 1976."
Dyalog's APL interpreter is still profitable and actively developed. Many of the source files begin with:
/*Copyright (c) 1982 Dyadic Systems Limited*/
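Day's 1976 technique rebalances a binary search tree in place with O(1) extra space (it was later refined into the Day-Stout-Warren algorithm). As a sketch of the result it produces, here is the much simpler recursive rebuild from sorted keys, which is not Day's in-place method:

```python
def balance(sorted_keys):
    """Rebuild a balanced BST from sorted keys by picking the middle as root."""
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2
    return {
        "key": sorted_keys[mid],
        "left": balance(sorted_keys[:mid]),
        "right": balance(sorted_keys[mid + 1:]),
    }

def depth(node):
    # Height of the tree; a perfectly balanced tree of n nodes
    # has depth ceil(log2(n + 1)).
    return 0 if node is None else 1 + max(depth(node["left"]), depth(node["right"]))
```

Day's version achieves the same balanced shape by first rotating the tree into a right-leaning "vine" and then folding it, which is why it needs no auxiliary array.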
The script is an expansion on the web server one-liner: https://github.com/imnotpete/odds-and-ends/blob/master/pytho...
edit: the one liner: http://www.garyrobinson.net/2004/03/one_line_python.html
And of course, reading that, I see my script is hugely overkill since all it does is alter the default port and allow you to pass one in -- something the one-liner can apparently do. Oh well.
The oldest that I'm still supporting is VB6/ASP code written in 2006, though the entire software is older than that. It's for "contingent workforce management" (temp labor) and handles the 3-way relationship between the client, the supplier, and the worker. In its heyday we were processing over $2B in payroll annually. It's now down to one client using it. The relationship is very weird too: it's me->hosting company->business company->end client. But I only have to do 1-2 days of support a month and have a pretty good contract for it.
All of the code I wrote earlier has long since been abandoned, I hope. There is the CBT application I worked on for BP that was for safety training operations for oil extraction in the Gulf of Mexico; and I -really- hope that it's no longer in use.
All the employees were more comfortable with PHP than Ruby. It was sold to 3 schools and, 6 years later, it's still in use. I'm out of the company now, and I suggested moving it to PHP so the developers could easily modify and maintain the app.
I left the company due to an increase in tasks/duties that I couldn't handle all at once (customer service, programming, server maintenance, explaining to my boss that I needed another Ruby programmer to help me out).
We basically rewrote the whole system in PHP+MySQL (front end and back office). It was my first time writing PHP professionally and it still seems to be working :)
Earlier than that, I wrote an Access database to manage claims for a small insurance company circa 2009. Except I wrote it in Access 98 and the last time I saw it the file was corrupted. I offered to help fix it but I was kicked out of the building for flirting with the claims department manager (actually a close friend of mine). This one may actually no longer be in use because I never split the UI and DB and it's not supposed to handle multiple users at the same time if you don't, IIRC.
Quite a few hits on Github: https://github.com/search?q=exocortex+dsp&ref=searchresults&...
Instead, I coded a small utility hidden within the program to show the image and allow recording with a single keypress. It is precise enough that you don't have to cut the audio afterwards, and it auto-saves to the right file name.
To this day, I didn't find a better way to record lots of small sounds, and since it was written in C# and XNA (which has been killed since then), I keep a Windows computer around just to run it, when I need it, about one or two times a year.
There is little incentive to rewrite it until it stops working altogether, since the use is so infrequent, but each time it saves me several hours of boring work.
A few times I've started a game programming job and found my code there already.
Recently someone ported it to C# where it can live on in Unity games.
A simple and straightforward UI for using Synergy on Linux for the most common use cases.
I haven't properly updated it in years (made some small changes last year though) but I know it's still in use e.g. my wife was surprised to learn it's quite popular at her company.
I kept a copy of Abramowitz and Stegun open on the desk for months... I don't think REDUCE is all that widely used nowadays, but it does still work, very well in some respects. It was closed-source at the time and was open sourced relatively recently.
I'll echo the sentiment that most of the commercial stuff I've worked on has either died or never launched at all.
I fully expected the thing to crash in a few weeks after I left and they would move on to something else, but no 8 years later and I'm told it is still running strong.
I had a hand-coded Geocities page (actually, 2 of them) in about 1995 and 1996. Should still be accessible in the Archive, or something.
This became first available with Apache Nutch 1.0. Although it was my second contribution to an existing open source project, it was my first contribution that added a new feature. As far as I know, this feature is still in use for crawling intranet web pages.
But I wouldn't even mind that if I didn't consider the code quality of that thing to be horrible. When I wrote it I had enough programming knowledge to tackle big projects, but I still lacked the years in which I would learn how and where (my) legacy code ends up. I also didn't put much care into things like keeping the namespace clean.
Some functionality: inter-channel communication, inter-server channel communication, offline messaging, reading RSS, working as ChanServ via commands (you can send it a private message or a message on a channel to promote you or someone else, or kick or ban someone). It collects messages and creates graphs about activity: per hour, per user or channel (so you could easily see who abuses the most after 3AM), per day of week... I stopped writing in Ruby for a few years; when I got back it was Ruby 1.9 and a lot of the syntax had changed. I know one person is still using it.
It was also my first ever application (other than scripts from tutorials), I wrote it when I was 15.
My VIC-20 and Commodore 64 versions were better, but those came later
1999. DynamicIP tracking script from the dial-up days. Tiny but it's been telling me where my home internet is for 17 years.
I had to do it because my mobile operator used to send unsolicited MMS that kept my old Android phone (ZTE Blade) in a wake lock, draining the battery. I still don't have a mobile data connection to this day!
So I bet I win for the number of copies of nearly-20-year-old code still being used among the comments here... probably somewhere between a quarter and half a billion of those laptops are still in use...
It has been running in various incarnations on salvaged old machines, and for the past 3 years in DOSBox, until January this year.
Still proud of that one.
Although I made "only" 1000 guilders (that's about $750 in 2016 dollars) on that, it seemed like a huge amount of money back then. :')
The code I wrote that will be running the longest is code I've contributed to Emacs. User-facing features are unlikely to be removed anytime soon.
From what I can see of the screen shots it looks like at least the core of that software is still in use.
I suspect all the stuff I wrote back in the late 80's and early 90's has long been switched off at some point, which is kinda sad. It would be hard to run today: EBCDIC mainframe stuff and 16 bit PC stuff.
I got my wisdom teeth taken out when I was in high school, and coded this in a weekend while eating Percocet and tapioca pudding. I had no idea it was even still for download. It actually works very well for what it is (an interactive sheet of paper for use with a book)...still in use? Probably not, but I am amazed that it is still there.
 https://pypi.python.org/pypi/transdate/1.1.1 (the initial version was released in the other form)
I have a Python/PyQt replacement, but it requires some more testing and development.
Both my brother's site (1999) and an ex-gf's art portfolio (2002?) are for the most part unchanged in all this time.
They don't look amazing but they're still...decent.
Some sites I made for a school I went to are still up since 1997, but I don't think they're directly linkable from the main page anymore, just hosted and indexed by google...
I'm very amused that even today people use my macro and thank me for writing it.
As far as actual programs, I have a perl script I use almost daily that I wrote in 1999. It's basically an easier-to-use `xargs` called `map`.
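I haven't seen the script itself, but the idea is easy to reproduce. A hypothetical Python sketch of a `map`-style wrapper: substitute each input line into a command template (`{}` marks the spot, as with `xargs -I`, otherwise the line is appended) and run it:

```python
import subprocess
import sys

def map_lines(template: str, lines) -> list:
    """Run `template` once per input line; `{}` is replaced by the line,
    otherwise the line is appended as the last argument (like xargs)."""
    outputs = []
    for raw in lines:
        line = raw.strip()
        if not line:
            continue  # skip blank lines instead of running an empty command
        cmd = template.replace("{}", line) if "{}" in template else f"{template} {line}"
        outputs.append(subprocess.run(cmd, shell=True, capture_output=True,
                                      text=True).stdout)
    return outputs

if __name__ == "__main__":
    # e.g.  ls *.txt | python map.py wc -l {}
    print("".join(map_lines(" ".join(sys.argv[1:]), sys.stdin)), end="")
```

The ergonomic win over plain `xargs` is running one command per line with obvious substitution syntax, rather than remembering `-I`/`-n 1` flag combinations.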
To this day, new players are greeted by an NPC that I wrote when first logging into the game.
I got my wisdom teeth taken out when I was in high school, and coded this in a weekend while eating Percocet and tapioca pudding. I had no idea it was even still for download! It doesn't matter now, because I believe the Magnamund books are being re-released in a more modern, integrated package...but there it is!
Unimaginatively named "Music Player", it remains the only music library organizer that I've used so far that completely ignores all the ID3 tags and respects the way I manually organized the files into folders (artists) and subfolders (albums). That's supposed to be a feature, not a limitation. Back then I had a lot of pirated MP3s with horribly inconsistent ID3 tags ;)
Maybe in the 90s, I might have actually built a linked list in C for whatever reason, and I guess you could define graph traversal to include walking a tree pulled out of a database. But the rest are things that The Universe Provides For You(tm), which one would only ever reproduce in School, a Job Interview, or a Poorly Chosen Hobby.
As to actual rites of passage? Ship Something that real people actually use. Build an entire thing, be it a piece of desktop software, video game, web application, mobile app, etc. from bottom to top and send it out in to the world fully formed. That, in my mind, is what we're here for.
We could totally make career bingo out of this...
1. Your own MVC framework. You should do this to appreciate why developers of other frameworks make the decisions they do. I gained so much wisdom from this.
2. Parsing HTML with regex (see here - http://blog.codinghorror.com/regular-expressions-now-you-hav...)
Seriously, just DON'T DO IT(tm) -- but if you do, you will eventually learn why you don't want to do it this way, and you might get pretty good at regular expressions
3. Your first mobile app published to the app store. Publishing apps to the Apple app store has given me a deeper appreciation for paying attention to the little details. Also, making native apps is a completely different paradigm than web apps, because shipping code with logic errors has such a high cost and delay in fixing them.
4. Port an existing library to a new language. A long time ago I ported a recipe parser from Ruby to Python for a paid gig. It was such a good learning experience because I had a perfectly functioning reference implementation, which allowed me to go deep on getting the details right.
I had to replicate test cases, documentation, scaffolding, and the code itself while being aware of the gotchas of Python.
Great idea, better than mine, but the code was awful, had three global state variables X, XX, and XXX ... plus U, UU, and UUU for the UFOs he'd added to make the game more fun ^_^.
I helped him reduce the complexity by removing the UFOs (it was still quite challenging enough), and getting it to work in general. This prepared me for the many future jobs I took working on the code bases of others (one of which, for example, taught me red-black trees for real), and which soon enough led to the extremes of software archaeology when you can't even ask anyone about the code.
Not that you necessarily want to seek out such work, it's hard and often thankless, but at its best it's also what paying down technical debt is about. And code you've written long ago can also be rather foreign when you come back to it....
My "hello world" for learning new languages and platforms. Integer based or floating point. 2d or 3d. Sound effects, sprite animation, physics, procedural particle systems, global leaderboards, digital skins and so on ad infinitum. Allows you to experience nuances in packaging and deploying WebGL vs Android vs Steam. Continue polishing it, and you may end up with something fun that others will love!
Can also be refactored into a full Tron Light Cycle style simulation. Which is a great way to learn AI. Good luck!
Every developer should also implement all the basic data structures and algorithms -- and then never write their own again! The process, however, definitely improves your chops, helps you understand the trade-offs inherent in choosing among data structures and algorithms, and gives you an appreciation for what's going on "under the hood."
Writing a shell in C (you can use Readline if you don't care to learn much about parsing) teaches you lots of stuff about the operating system you're working on.
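The heart of that exercise is the fork/exec/wait cycle. A minimal sketch, in Python rather than C for brevity, but hitting the same Unix syscalls (so Unix-only):

```python
import os
import shlex

def run_command(line: str) -> int:
    """Fork, exec, wait: the core loop of every Unix shell."""
    argv = shlex.split(line)
    if not argv:
        return 0
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)          # command not found, like real shells
    # Parent: block until the child finishes and report its exit status.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

def repl():
    # The rest of a shell is a read-eval loop around run_command.
    while True:
        try:
            run_command(input("$ "))
        except EOFError:
            break
```

Everything else a shell does (pipes, redirection, job control) is layered onto this same cycle with `pipe()`, `dup2()`, and process groups, which is exactly why the exercise teaches so much about the OS.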
My favourite was a small company that told me on the phone that they 'like to get girls in for interviews', and then I stupidly still went to a face-to-face (I was desperate) and got rejected. The reason: I used a loop to implement something but didn't suggest I could copy and paste the loop innards 10 times, and therefore was 'holding things back'.
* Write malloc() and free().
* Write a gzip decompressor.
* Write a triangle rasterizer and use it to draw a spinning cube on the screen.
Each of those is a decent but manageable amount of work for a new dev and will teach you a variety of useful low-level skills.
Seems to me that the best way to test your well-rounded skills as a programmer is to build and launch a product. Even if you don't aim to be an entrepreneur, the holes you find while taking an idea from inception to launch are much bigger holes than you'd find building this tree vs that tree.
However, in terms of projects popularly considered to be commonly implemented by newer programmers that do hold benefit, in network programming, I would say a traceroute implementation. Server-side, I'd say any multi-node cluster system, preferably diskless. Any embedded system. An RDBMS system. A NoSQL system. An open source intelligence system. Any computational linguistic system. Any i18n/l10n heavy project.
> Among these, do the ray tracer and neural network seem way easier than the competition to me due to my math background, or are they just weirdly chosen?
Those two are essentially algorithms, not "projects".
You won't often see a big project structured this way, but it is very effective for a first hand experience of data driven design.
1. Build your own CMS
2. Build your own framework
3. Performance tune SOMETHING intensely so that you can observe the bottlenecks and their causes across the stack
Just off the cuff there.
Some kind of game.
Edit: Something that uses an API.
Hurry up and solve the user identification problem, HN.
In order to get started with neural networks, begin with drawing simple neural nets for basic operations like addition, multiplication, XOR. Just represent boolean tables as neural networks.
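XOR is the classic case that needs the hidden layer, since it isn't linearly separable. A hand-wired sketch with step activations, using an OR unit and an AND unit feeding a single output:

```python
def step(x):
    # Hard threshold activation: fires when the input exceeds zero.
    return 1 if x > 0 else 0

def xor(a, b):
    # Hidden layer: one unit computes OR (threshold 0.5),
    # the other computes AND (threshold 1.5).
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    # Output: fires when OR is on but AND is off, i.e. exactly one input is set.
    return step(h_or - 2 * h_and - 0.5)
```

Working out weights like these by hand for a few boolean tables makes it much clearer what backpropagation is later searching for.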
Once you can do that, move on to implementing the algorithm yourself. A simple 3 layer network is enough to understand how the concept works. 4/2/2 nodes is plenty. Just understand how the calculations work.
Then move on to a framework - only after you understood the math. The machine learning course on coursera by Andrew Ng(?) explains the algorithms.
2. mnist hand written digit recognition
Here is the code..
How to Start a Startup: startupclass.samaltman.com. Watch the whole series, or you might regret it.
Lean Startup: http://theleanstartup.com/ Get this book. If you're not a reader, become one. Don't skim it; read it.
The Founder's Dilemmas: http://www.amazon.com/Founders-Dilemmas-Anticipating-Foundat... Read the whole book before you go get married to a cofounder.
ask yourself if that is what you really want. YCombinator is close.
Then don't roll your own crypto.
Textsecure had SMS support and there are forks adding websocket support.
My suggestion would be to work on untangling TextSecure from Google.
I also don't see why you shouldn't be allowed to share where you cross posted this to.
This will just confuse the user and limit adoption in my opinion.
Not sure about the quality of security, but it's distributed:
https://ring.cx/en - Ring is a secure and distributed voice, video and chat communication platform that requires no centralized server and leaves the power of privacy in the hands of the user.
I "hustled" a few professors and got some face time around the 2001 time frame. But I was out of undergrad (not in high school, as you asked about). These are the things I recommend.
0) Have clear goals on what you want from the professor (and what you can offer too). I showed interest and competence in doing research and I wanted funding from them.
1) I worked extensively in a couple of areas (Genetic Algorithms, Simulated Annealing, Finite Element Methods, Optimization of Manufacturing Processes) and published conference and journal papers before requesting a meeting. I am not suggesting you do this, but you need to show you have done some work to warrant face time with professors.
2) After you have done 0 and 1, email them, call them, and somehow "stalk" them "respectfully". To meet the professor I did my Master's thesis with, I wrote to him a few times and called him. But finally, I waited in front of his office for a few hours every day and eventually got a chance to meet him. I got funding and completed my degree with him.
3) Alternately, if you can attend some conference (figure out a way not to pay but attend the conference) and meet them there. That's a big plus, IMO.
4) Write to the professor's PhD students and start communicating with them. Learn from them, do your research, and build some projects/ideas and use them to meet the professor. This is a "back door" entry into the professor's lab, but PhD students are quite helpful and can guide you in building a "portfolio" of projects.
If that is not possible, just write an email and suggest you can talk further on the phone or in person. Just ask and if they say no, go to the next.
Not clear what you are looking for.
Any method of reaching out is good, but if class is in session profs usually have posted office hours (hours where you can visit them and talk about the class). In my experience (long ago) they tend not to get a lot of visitors, so if you can, doing it in person might not be a bad idea. Of course being in high-school limits the time available to do this..
Kind of hard to say unless you're more specific with what you want to be doing.
Generally, try to find someone to make an introduction on your behalf. That will go over a lot better, IMO.
I get a sense that senior devs who complain about age discrimination are the ones who haven't kept on learning, and haven't been keeping their skills up to date.
I could be wrong, but I have no problems finding opportunities. I see others my age or even much older who exhibit enthusiasm and a youthful wonder and they seem to be swimming in opportunities.
As to whether or not age discrimination is an issue... if it is, I haven't really noticed it as such.
My career just keeps getting better. It's not hard to stay up to date if you stay interested and engaged. And if you're interested and engaged, you find yourself not really aging in the way a lot of people do.
Another nice benefit of aging that's helped my career is wisdom earned from experience. I started at AOL in '98 when I was 16 (it was cool then) and have worked in the industry since. There's not much in the way of team dynamics or project mishaps I haven't seen, and that gives me clarity and calm during crisis, which tends to make people want to put me in charge.
I've also learned a lot about interpersonal relationships, confidence and finding mutual benefit. While those aren't typically skills most people associate with staying relevant as an engineer, they are skills that put you in positions to succeed in any walk of life.
I'd say the problem for some veteran developers isn't aging itself, it's getting into a mentality of aging. As long as you embrace the new in technology instead of clinging to old methodologies, you'll be fine. I know developers much younger than me that gripe about new frameworks and libraries like it's a chore to learn. It's still as exciting to me today as the 2400 baud modem I got from my parents in '92.
IIRC the opinion was that you had to be in the "in crowd" to both post and to get promotion.
I don't know of any case where a mainframe couldn't, in principle, be replaced by a server farm. However, maintaining a sufficiently reliable and secure server farm is a specialist job too so the benefits aren't always clear cut.
Ease of migration is really down to what the mainframe is doing and for how long. In practice, they tend to be in place for a number of years in hub-type roles e.g. general ledgers, payment systems. That sort of thing. The effect is that extricating them is extremely complex and expensive.
However, these issues aren't really limited to mainframes. We have the same problems with trying to extricate Sparc Solaris applications. We generally want to replace them with Lintel as we use them in most applications. But doing so is horribly complicated as they're embedded in byzantine flows that few understand and the systems often assume a level of hardware reliability that commodity boxes don't have.
Oh god, no, don't even think about it. It's hard enough migrating away from a code base in Java, written 10 years ago and that runs on pretty much anything. Imagine how hard it is to migrate from a code base written in languages known best by engineers in retirement, that only runs on extremely expensive supercomputers made by almost a single company (IBM).
But the worse thing is that you can't just pull the guts out of the software that runs all the transactions of huge financial organisations. You can't just throw a switch and stop all financial activity in a big bank, while you swap a new system in place, because every second of downtime costs millions. Millions! Who is going to take the responsibility for that?
So change never happens because nobody dares make it. One day mainframes will outlive the last people who can program them competently and then we'll all be in trouble.
I wonder how much of my 1986 code is still in use.
I remember at one point he was in a meeting with a bunch of people and the CEO had made a suggestion and he told the guy it was the stupidest idea he had ever heard.
- Saves files automatically and saves revisions
- Allows diffs between versions
- Allows rollback or patches
- Include event information like unit test results or git actions at each version
I don't think this is available as a CLI tool, but when you are using an IDE anyway this might be a solution for you.
$ du -sch `which git`
2,0M	/opt/local/bin/git
2,0M	total
Edit: holy mother of god, the philosophy page is so scarily close to my thoughts. Very happy to see this happen; I kept them in my head and you have implemented it!
Does collapse comments as well as other nifty features.
i.e. - How long can an organization avoid improving itself on the basis of reducing overall risk?
I wrote a userscript that is simple and works well.
One thing is that it obviously doesn't help on mobile.
On the other, the experience is way better lately.
Do you remember the "expired link"?
If you have Greasemonkey, there's a userscript that also works the same way: http://userscripts-mirror.org/scripts/review/288192
 - http://nvd3.org/examples/pie.html
Very clean and very good.
 - http://www.highcharts.com/
1. The Web Application Hacker's Handbook
Probably the first book you want to read; this will teach you the core mindset you need for finding security flaws in web applications as well as give a very strong foundation for the different classes of vulnerabilities.
2. The Mobile Application Hacker's Handbook
Good supplement to #1 for application security, obviously focused on mobile apps.
3. The Art of Software Security Assessment
The bible of the security industry. Especially instructive for source code review.
4. Security Engineering (Ross Anderson)
Supplements #3. Very instructive for injecting security into the overall SDLC and designing secure software.
5. The Tangled Web
Excellent historical background and good high level overview of many information security topics. Every engineer should read this, even if they don't work in security.
6. Gray Hat Python
Very hands on, good introduction to aspects of reverse engineering and the typical work an e.g. security consultant will do at a top firm.
7. Practical Malware Analysis
Very good introduction to malware analysis.
8. Practical Reverse Engineering
This book, along with #9 will teach you everything you need to know to effectively reverse engineer software for security-focused analysis.
9. Reversing: Secrets of Reverse Engineering
10. The IDA Pro Book
You'll want this if you have any plan to work with IDA Pro at all, which is the gold standard for disassembling and reversing software.
11. The Shellcoder's Handbook
If you'd like to write exploits after you're done reversing software to find an exploitable bug, this is a good book to pick up.
12. Cryptography Engineering
Very solid and broad introduction to cryptography. Every engineer should read this, even if they don't work in security.
13. Introduction to Modern Cryptography
This book, along with #14 is what you want to read if you're going to work as a cryptographer or cryptanalyst professionally.
14. Handbook of Applied Cryptography
Theoretically, these books should resolve your known-unknowns and your unknown-unknowns. Anyone who reads and works through the list should be capable of designing secure software, finding errors in white and black box source code reviews and finding errors in white and black box penetration tests.
If you're looking to get into this professionally, feel free to contact me if you have any questions and I'll do my best to help.
Ad revenues drop year after year. I think it's a dying business in its current form.
I think an API should be available for advertising. The API would provide the name of the advertisement, the description, colour scheme, image, and more. Then, developers can use that information to work the advertisement into their site in an attractive and seamless manner.
An example is above. Instead of having a leaderboard advertisement for Chrome, HN could get the advertisement details, and insert it into the site to resemble a post. Every site could then style the advertisement their own way.
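A sketch of the idea; no such API exists today, so every field name below is invented for illustration:

```python
import html

# Hypothetical payload such an advertising API might return.
ad = {
    "name": "Chrome",
    "description": "A fast, free web browser.",
    "color_scheme": {"background": "#f6f6ef", "text": "#828282"},
    "image": "https://example.com/ad.png",
    "link": "https://example.com/click?id=123",
}

def render_as_post(ad: dict) -> str:
    """Style the ad with the site's own markup so it reads like a native post."""
    colors = ad["color_scheme"]
    return (
        f'<tr style="background:{colors["background"]}"><td>'
        f'<a href="{html.escape(ad["link"], quote=True)}">{html.escape(ad["name"])}</a> '
        f'<span style="color:{colors["text"]}">{html.escape(ad["description"])}</span>'
        f'</td></tr>'
    )
```

The point is that the publisher controls the markup; the advertiser only supplies structured content, so the same ad can look native on every site that consumes it.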
I think it's less annoying to users, fast to load, users are more likely to read and click it in comparison to a banner they're blind towards, advertisers would get more for their money, and publishers don't need to destroy their site with gaudy flashing boxes that everyone ignores. It's a win-win-win situation, and my vision for the future of advertising.
Personally, I'm looking to drop AdSense in the near future and try alternative revenue streams. I worked hard to develop my community. I don't like pushing some ugly, irrelevant, tracking filled ads on my users, in exchange for a few cents on every thousand impressions. It's a business that needs fixing, because advertisers are getting ripped off, users are bending over or forced to install ad blockers, and publishers need to keep pushing more and more advertisements to make the same revenue they earned last year. I'm surprised Google hasn't been more innovative in this space considering it's their bread and butter.
At some point, you'll need to develop commercial agreements with other corporations. At that point, you'll probably need to be a commercial entity yourself.
Also, Amazon already has open source activity around Alexa: https://github.com/amzn/alexa-avs-raspberry-pi
tl;dr: Just decide and get started.
Especially if it won't be English-only or, even better, could be taught other languages by a skilled end-user.
> Does the opposite exist?
> If research does exist, are these studies independently funded or are they funded by the companies of these techonologies?
I think that educational technology is still at a point of "throw it at the wall and see if it sticks". I also think a lot of technology is being deployed without adequate professional development for teachers.
They have the technology, but haven't gotten the proper level of training nor had the proper resources to develop effective curriculum using technology.
Thus far, effective programs always seem to have a "rock star" teacher/instructor at the center, going the many extra miles needed to establish gravitas.
I haven't looked at all of them, but you should be able to see some data/research by the participants supporting their educational technology.
Also, voting is open to the public until 2016-05-24 @ Noon.
SRI reviewed the research on our impact with fairly positive findings.
 - detail on the approach in the International Journal of Artificial Intelligence in Education: http://ijaied.org/pub/1368/
 - https://www.sri.com/work/publications/strength-research-reas...
Schools suffer from bureaucracy, not a lack of tech.
However, the question seems to contain an ambiguity. Is your project ultimately focused on the needs of businesses or individual users?
My random internet person's advice, forget about sign ups. Forget about pre-launch. Build something basic and launch it to one person. Talk to that person. Launch the next iteration to two people. Iterate and launch to four. Keep going until you know it works or it doesn't.
It certainly may be the case that they actually are interested in seeing interviewees past such points in a somewhat controlled environment.
Such "gauntlet" interviews seem common among large technology companies and while I think some of it is a semi-intentional attempt to push interviewers to limits to see how they react, I think it's also just as much "everybody else does it" and that continual momentum of existing interview processes. Which is precisely how such hazing processes in organizations get normalized and their problems ignored over the long term. (Yes, pushing someone to the limits of their productivity is strenuous activity and the very definition of hazing.)
But here's what I don't understand at all...
Has anyone ever been sailing? On yachts we have radios, but these radios are designed assuming the person operating it might be completely ignorant.
So we have this literal red button, you lift a flap, hold down the button and the radio sends an SOS with complete GPS coordinates and boat name on Channel 16. Then it leaves it on channel 16 so you can describe the emergency.
So back to smartphones, on smartphones we have dialer apps, these apps know when you dial 911. Why in holy heck don't they have a big red button on-screen which when pushed sends your current GPS coordinates USING VOICE over the open line?
Here's what we need to do that:
- Dialer app. CHECK.
- GPS. CHECK.
- Some kind of UI. CHECK.
- Text to voice system. CHECK.
We have all of the components to roll out a system TODAY which tells 911, via voice, where you are calling from. It would be almost free, but we haven't built it, and nobody is suggesting it.
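The text-to-speech piece is trivial; the only care needed is making the coordinates unambiguous for an operator typing them in. A sketch of the string you'd hand to the phone's TTS engine (the TTS call itself is platform-specific and omitted):

```python
def speak_coordinates(lat: float, lon: float) -> str:
    """Spell GPS coordinates digit by digit so a 911 operator can type them."""
    words = {".": "point", "-": "minus"}
    def spell(value: float) -> str:
        # Fixed 5 decimal places gives roughly 1-meter precision.
        return " ".join(words.get(ch, ch) for ch in f"{value:.5f}")
    return (f"Caller location. Latitude {spell(lat)}. "
            f"Longitude {spell(lon)}. Repeating. "
            f"Latitude {spell(lat)}. Longitude {spell(lon)}.")
```

Digit-by-digit readout and a repeat are the same conventions marine radio distress calls already use, precisely because the listener is transcribing under stress.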
Everyone is talking about these crazy complicated standards that will, best case, be available in 2021 and cost ungodly amounts. I am talking about using voice which the operator themselves can type in.
Am I mad here? Why isn't this a thing? Why doesn't the dialer even DISPLAY GPS coordinates when you dial 911?
Seriously I bet if someone made this a big deal we could get Apple and Google to sign on almost immediately and this would be available within a year. All smartphones already have all of the prerequisites to do this!
She entered scattershot information from a frantic caller, and using a combination of keyboard shortcuts, foot pedals (yes, foot pedals), and stand-up-and-hand-signal-to-a-colleague-while-keying-a-mic, dispatched an ambulance within seconds. Way faster than the caller would ever realize.
You're not just replacing software/hardware, you've also got to make allowances for the humans in the system. Little things make huge differences in those situations, so throwing the old system out is painful. Evolving the current one (as davismwfl pointed out) is challenging for its own reasons.
Doesn't mean it isn't important. Just hard to do.
First, most pizza apps are just that, an app. That app has access to your phone's location data through GPS, and some even use WiFi location services. Hence it can send a nearly exact position for where you are standing.
Contrast that to the standard phone network and systems ANI/ALI solution, which still does not (completely) support GPS coordinates at this point. In addition, while there were phases (phase I and phase II) of cell phone location compliance put into place by the federal government, most networks and phone companies lagged far behind in implementation of those standards. On top of that, city and county 911 dispatch centers (PSAP and secondary centers) also have to upgrade their phone and CAD integrations to support better location services.
As for why it isn't being disrupted: simple, looooooooong sales cycles for an extremely limited market that is vigorously defended by the incumbents. Seriously, a 2-3 year sales cycle isn't rare for a lot of 911 components and systems; 18 months is about normal when critical systems are involved, with 12 months being about the fastest you ever see anything change. Not to mention, the partners you need involved to make a solution work and be palatable to the 911 centers are the exact same companies who want to keep you out of their market, so it isn't easy. Not impossible, just not very probable without seriously deep pockets to support what would likely be 3-5 years of development before the first sale. It makes selling to enterprises look like a fast process and a cake walk.
Granted, it could rely on the software running on your phone to add information for the dispatcher, but not everyone has one, and not everyone who does has the same kind. And that's a good thing!
So it takes standardization, and government regulation. Standardization can work pretty well when there's a nice tight feedback loop with customers who are interested in the results (web browsers, for instance). How many telephone customers would switch phones based on the details of how well the phone supports 911-related features? It's not like we can test them without actually calling 911.
Government regulation can also work, provided you're willing to pay the costs: time and money. Lots of time, and lots of money. In fact, it costs so much for the government to regulate things like this that we end up in this exact situation. Phones have completely changed since the last 911 regulations were updated, requiring telcos to provide location information to 911 when the caller is using a mobile phone. It took years after that regulation was introduced before the telcos were compliant, and before all the local dispatch operations could use the information.
The same would happen today if new regulations were introduced requiring the phone itself to send this information; it would take years for anything to happen. (Though I bet Google and Apple could move faster than the telcos, they've certainly proved to be capable of that.)
And that's all ignoring the inaccuracy of GPS when inside of buildings, the time it takes for the phone to determine the location, etc.
Still, in spite of all of that, now is probably a decent time to start making those changes. It's been long enough since the last updates to the regulations, and new phones are capable enough now, that you'd have a decent chance of getting it done eventually.
- If the user doesn't know the location, and they have given the app permission to read GPS data the assistant operator could check the phone's location data and chime in with the correct address in the conversation.
- If the user is unable to speak, they could send text messages to the assistant operator, who would relay them to the 911 dispatcher.
- If the user is unable to do anything more than pressing a button, and if they've given the app permission for this, the assistant operator could check the messages or other data on the user's phone to try and find out what the issue is (e.g. domestic violence.)
- The user could initiate a video call with the assistant operator who could theoretically be able to more accurately describe certain issues (assuming they are better trained medically) than the user themselves could.
"The 112 Suomi application enables the automatic delivery of the caller's location information to the emergency service dispatcher (in Finland).Continued use of GPS running in the background can dramatically decrease battery life.
By using this application you agree to the following terms and conditions: http://www.digia.com/PageFiles/112 20Suomi/112-Suomi-app-user's-licence-agreement.pdfRegistry extract according to the Personal Information Act: http://www.digia.com/PageFiles/112 Suomi/112-Suomi-app-registry-extract.pdf"
Here there is an app that will send that information to 911, without worrying about giving special permissions at the time of the call. I'm not sure why that isn't standard on cell phones everywhere, especially since phones generally come pre-loaded with apps. I know that landlines in the states send the information to 911, so I would think some people much smarter than I would be able to make an interface to go between the two. I suppose that would take some time, given how well the government seems to work together to get important things done these days.
The phone companies COULD upgrade to newer technology right now, but that would cost money.
They could wait until it is a crisis and then DEMAND that municipal governments pay for the upgrade.
Which would you do?
Not to say you can't make money off it.
(This is not snark, in case you are wondering.)
* Put some effort into the design. Writing plain HTML while also indicating you have run a website and have front-end experience might seem contradictory to a potential employer. A nice font, simple layout and basic responsiveness would go a long way.
* A startup exit, even if small, is a considerable achievement. Consider adding "Acquired by $COMPANY", or if not at liberty to disclose the company name, something like "Acquired by large German corporation".
* Add contact information. If I were to stumble upon your CV online and be interested in hiring you, I should not have to spend time Googling for some way to contact you. Listing your email in the footer is sufficient.
* I assume not getting your degree yet is because of a few pending courses - no need to mention this here. You did the majority of the work and actually completed a thesis project. When you are invited to an interview, disclose it there.
* Some of the stuff you list (machine learning libraries, contributions to poker theory) sound really interesting. Is there anything you could link to here? Blog post, Github, etc.
I accomplished X, relative to Y, by doing Z.
For example for SocialInsight.io:
- Instead of "Instagram Analytics": Helped X customers increase their Instagram revenue / engagement / etc by Y% by building an Instagram analytics Software-as-a-Service.
- Instead of "Big Data on a small server": Enabled fast in-memory Instagram stream processing by develping an optimized in-memory storage format that brought the per-post in-memory size from 300kb to 30kb. (I have no idea what you did, but actually frame the technical achievement that made it possible.)
What I'd like to know if I were assessing your CV as a potential hire:
SocialInsight.io: Is it just you, or a team? How many customers? How much revenue? Did you individually do all the design, coding, and marketing, or did you have co-founders / other employees?
Freelance Data Mining: What was the most impressive job that you worked on? How many jobs have you done? I'd drop the $80/hr, and don't mention Upwork. Could get away with replacing Freelance Data Mining with "Data Mining Consultant."
Affiliate Marketing, need things like:
- Built a successful direct response affiliate marketing business that funnelled insurance leads to brokerages.
- Developed a custom analytics suite which allowed me to increase conversion rates from x% to x% and decrease traffic acquisition costs by x%.
Professional Online Poker: This is super interesting.
- Funded my university study through playing Limit Hold'Em poker online.
- Contributed "blah" to modern poker theory.
- Ran analytics on opponent data to gain an edge, etc.
Things to remove:
- Note about girlfriend's death
- "Machine Learning before it was cool" makes you sound like somebody I wouldn't want to work with. Better to say "Pioneered the application of machine learning to blah aspect of online poker. Built out custom machine learning libraries to implement methods blah."
- "I believe that inbound communication should come to me. I should not have to hunt for it" Wording of this is a red flag for me. Also, no real idea of what Zophoz is.
Echoing petervandijck: Definitely create a LinkedIn profile and create (or list) your Github account.
Then write: "10 years of software development experience ranging from low level C all the way to front-end web dev. 7 years of data science experience. Startup founder."
Then write a sentence of the type of work you're looking for. (startup? freelance?)
Then list your experience, with dates, going back those 10 years of experience. Remove rates (the $80/hr).
Also, you should put this on LinkedIn (regardless of how much you hate it.)
How I would do it, requiring no user input:
* Designate a hot folder on the NAS, where I put all the videos to be uploaded.
* Establish a list of what has been transferred (nothing initially).
* From the Pi, poll the NAS folder for files that haven't been transferred yet.
* If a file is found, cat file | curl --data-binary @- POST it to YT.
* On success, record the transferred filename.
* Continue polling.
Of course, you can quite easily bolt-on a web interface to this, by exposing some of the steps as API endpoints.
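The polling loop above can be sketched in a few lines. This assumes the NAS hot folder is mounted locally, and it takes a hypothetical `upload` callable standing in for the cat-to-curl pipeline; the ledger file plays the role of the "list of what has been transferred". Recording a filename only after the upload returns means a crash mid-transfer just retries that file on the next poll.

```python
# Sketch of the hot-folder sync loop. `upload` is a placeholder for
# whatever actually ships the file (e.g. shelling out to curl).
import time
from pathlib import Path

def sync_once(hot_folder: Path, ledger: Path, upload) -> list[str]:
    """Upload files not yet recorded in the ledger; return names sent this pass."""
    done = set(ledger.read_text().splitlines()) if ledger.exists() else set()
    sent = []
    for video in sorted(hot_folder.glob("*.mp4")):
        if video.name in done:
            continue
        upload(video)
        # Record success only after the upload call returns, so failures
        # leave the file eligible for retry on the next poll.
        with ledger.open("a") as f:
            f.write(video.name + "\n")
        sent.append(video.name)
    return sent

def poll_forever(hot_folder: Path, ledger: Path, upload, interval: int = 60):
    while True:
        sync_once(hot_folder, ledger, upload)
        time.sleep(interval)
```

Each of the API endpoints for a bolted-on web interface would just be a thin wrapper around `sync_once` and a read of the ledger file.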