1. Resizing with imagemagick: https://bash.rocks/Gxlg31/3
2. Resizing and convert to webp: https://bash.rocks/7J1jgB/1
After creating the snippet, you could either use GET https://bash.rocks/0Be95B (query parameters become environment variables) or POST https://bash.rocks/jJggWJ (the request body becomes stdin).
It's not hard to roll your own backend like this for private usage (simply exec from node). I'm also working on an open source release.
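A minimal sketch of such a backend in Python (the comment mentions exec'ing from Node; this port, the function name, and the timeout are illustrative, not the actual service's code):

```python
import os
import subprocess

def run_snippet(script, query_params=None, body=""):
    """Run a stored shell snippet the way the service above describes:
    query parameters become environment variables (GET), and the
    request body becomes stdin (POST)."""
    env = dict(os.environ)
    env.update({k: str(v) for k, v in (query_params or {}).items()})
    result = subprocess.run(
        ["sh", "-c", script],
        input=body,
        env=env,
        capture_output=True,
        text=True,
        timeout=30,  # never run user-supplied snippets without limits
    )
    return result.stdout
```

For private usage you would still want sandboxing on top of this; a bare exec is only safe when you trust every snippet.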
Images are complicated and important enough that I don't see that changing any time soon.
It compresses and optimizes PNG, GIF, and JPEG, creates WebP for browsers that support it, inlines small images into your HTML, long-caches images, and even creates srcsets.
The only tool I ever found which does this job reliably, even for huge images, is http://www.vips.ecs.soton.ac.uk .
Be especially careful with these utilities when running them on UGC. PNG / JPEG bombs can easily cause OOM or CPU DoS conditions etc.
As for metadata, today I decided to add it back in.
For ecommerce it will eventually help to have product data, e.g. brand, product name, etc., embedded in the image.
My other tip: if you go the ImageMagick/PageSpeed route, you can use 4:2:2 chroma subsampling and ditch half the bits used for chroma.
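This maps onto ImageMagick's real `-sampling-factor` option. A hedged sketch of building such a command (the helper name and quality value are illustrative, not a recommendation):

```python
def jpeg_convert_cmd(src, dest, quality=82):
    """Build an ImageMagick convert command using 4:2:2 chroma
    subsampling, which halves the horizontal chroma resolution.
    The quality value is illustrative."""
    return [
        "convert", src,
        "-sampling-factor", "4:2:2",  # 4:2:0 would halve vertical chroma too
        "-quality", str(quality),
        dest,
    ]
```

Run the returned argv with `subprocess.run` once you trust the inputs.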
Tbh the UGC side is just triggering the "build process side" as the upload occurs.
As for the best approach, I'd suggest you look there for some decent examples of how to go about it. They may be defunct, but I use a similar approach (slightly different knob tweaks with the same binaries) and it works fine. It may not be 100% optimal, but it's good enough imo.
Edit: You haven't heard from them because they are aiming very high so it will take years before any of their work hits the general public.
Edit 2: From my understanding, they are still working on their Universal Basic Income Research as well and have chosen Oakland as the testbed: http://basicincome.org/news/2017/04/httpswww-youtube-comwatc....
For those who like the outdoors, just get your off road vehicle and face the indomitable and untouched nature. No paved roads, no concrete, nothing outside these habitable malls interconnected by hyperloops. Of course there will be supply roads for trucks but they will be just like highways interconnecting mega farms to mega malls.
Nah, scratch that, there is nothing like a house in the suburbs with a huge yard and a barbecue.
AFAIK, Ben Huh is still in charge of the project.
 http://www.subtext-lang.org/AboutMe.htm https://twitter.com/jonathoda/status/871784998113882118
1) Large order sent to market
2) Exchanges with a serious lack of liquidity
3) stop loss orders making things worse.
Everyone has their personal pet peeves; mine is stop loss orders. It's one of the 3 things that amateurs tend to use without any understanding of markets. The other two are use of margin, which probably doesn't need any explanation, and trading currencies/currency pairs.
In today's markets, stop loss orders are like market orders: 99.99% of the time, only people who don't know what they are doing use them.
1. Took too long to get something working. The common use case of hooking up a Lambda function to an HTTP endpoint is surprisingly fiddly and manual.
2. Very painful logging/monitoring.
3. The Node.js version of Lambda has a weird and ugly API that feels like it was designed by a committee with little knowledge of Node.js idioms.
4. The Serverless framework produces a huge bundle unless you spend a lot of effort optimising it. It's also very slow to deploy incremental changes. (Edit: this is not only due to the large bundle size, but also due to having to re-up the whole generated CloudFormation stack for most updates.)
5. It was worth it in the end for making a useful little service that will exist forever with ultra-low running costs, but the developer experience could have been miles better, and I wouldn't want to have to work on that codebase again.
Edit: here's the code: https://github.com/Financial-Times/ig-images-backend
To address point 3 above, I wrote a wrapper function (in src/index.js) so I could write each HTTP Lambda endpoint as a straight async function that simply receives a single argument (the request event) and asynchronously returns the complete HTTP response. This wouldn't be good if you were returning a large response though; you'd probably be better streaming it.
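The actual wrapper lives in the linked repo (Node). As a hedged illustration of the same idea in Python: a decorator that turns a plain event-to-body function into the response shape API Gateway's Lambda proxy integration expects (the names here are mine, not the repo's):

```python
import json
import traceback

def http_handler(fn):
    """Wrap a plain function (event -> body dict) into an API Gateway
    Lambda-proxy response, so endpoint code stays a simple function."""
    def wrapper(event, context):
        try:
            body = fn(event)
            return {
                "statusCode": 200,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps(body),
            }
        except Exception:
            traceback.print_exc()  # ends up in CloudWatch Logs
            return {"statusCode": 500, "body": json.dumps({"error": "internal"})}
    return wrapper

@http_handler
def hello(event):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"message": "hello " + name}
```

As noted above, buffering the whole body this way is fine for small responses but wrong for large ones.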
My #1 concern with it went away a while back when Amazon finally added support for Python 3 (3.6).
It behaved as advertised: it allowed us to scale without worrying about scaling. After a year of using it, however, I'm really not a big fan of the technology.
It's opaque. Pulling logs, crashes and metrics out of it is like pulling teeth. There's a lot of bells and whistles which are just missing. And the weirdest thing to me is how people keep using it to create "serverless websites" when that is really not its strength -- its strength is in distributed processing; in other words, long-running CPU-bound apps.
The dev experience is poor. We had to build our own system to deploy our builds to Lambda. Build our own canary/rollback system, etc. With Zappa it's better nowadays although for the longest time it didn't really support non-website-like Lambda apps.
It's expensive. You pay for invocations, you pay for running speed, and all of this is super hard to read on the bill (which function costs me the most and when? Gotta do your own advanced bill graphing for that). And if you want more CPU, you have to also increase memory; so right now our apps are paying for hundreds of MBs of memory we're not using just because it makes sense to pay for the extra CPU. (2x your CPU to 2x your speed is a net-neutral cost, if you're CPU-bound).
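The "net-neutral" claim at the end follows directly from how Lambda bills memory times duration (GB-seconds). A quick sanity check, with an illustrative per-GB-second rate (check current pricing):

```python
def invocation_cost(memory_mb, duration_ms, rate_per_gb_second=0.0000166667):
    """Lambda bills memory size x duration (GB-seconds).
    The rate here is illustrative, not a quote."""
    return (memory_mb / 1024.0) * (duration_ms / 1000.0) * rate_per_gb_second

# Doubling memory doubles CPU; for a CPU-bound task the duration halves,
# so the per-invocation cost is unchanged:
base = invocation_cost(512, 400)
doubled = invocation_cost(1024, 200)
```

This is why bumping memory purely for the extra CPU can be rational even when the memory itself goes unused.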
But the kicker in all this is that the entire system is proprietary and it's really hard to reproduce a test environment for it. The LambCI people have done it, but even so, it's a hell of a system to mock and has a pretty strong lock-in.
We're currently moving some S3-bound queue stuff into SQS, and dropping Lambda at the same time could make sense.
I certainly recommend trying Lambda as a tech project, but I would not recommend going out of your way to use it just so you can be "serverless". Consider your use case carefully.
Lambdas have a lot of benefits - for occasional tasks they are essentially free, the simple programming model makes them easy to understand in teams, you get Amazon's scaling and there's decent integration with caching and logging.
However, especially since I had to use them for a whole solution, I ran into a ton of limitations. Since they are so simple, you have to pull in a lot of dependencies, which negates a lot of the ease of understanding I mentioned before. The dependencies are things like Amazon's API Gateway, AWS Step Functions, and the AWS CLI itself, which is pretty low-level. So the application logic is pretty easy, but now you are dealing with a lot of integration devops. API Gateway is pretty clunky and surprisingly slow. Lambdas shut themselves down, and restarting is slow. Step Functions have a relatively small payload limit that needs to be worked around. Etc. So use them sparingly!
One thing to note. API Gateway is super picky about your response. When you first get started you may have a Lambda that runs your test just fine but fails on deployment. Make sure you troubleshoot your response rather than diving into your code.
I saw some people complaining about using an archaic version of Node. This is no longer true. Lambdas support Node V6 which, while not bang up to date, is an excellent version.
Anyway, I can attest it is production ready and at least in our usage an order of magnitude cheaper.
- CPU power also scales with Memory, you might need to increase it to get better responses
- Ability to attach many streams (Kinesis, Dynamo) is very helpful, and it scales easily without explicitly managing servers
- There can be an overhead: your function gets paused (if no data is incoming) or can be killed non-deterministically (even if it runs all the time or every hour), causing a cold start, and cold starts are very bad for Java
- You need to make your JARs smaller (50MB), you cannot just embed anything you like without careful consideration
Claudia.js also has an API layer that makes it look very similar to express.js versus the weird API that Amazon provides. I would not use lambda + JS without claudia.
For usage scenarios, one endpoint is used for a "contact us" form on a static website, another we use to transform requests to fetch and store artifacts on S3. I can't speak toward latency or high volume but since I've set them up I've been able to pretty much forget about them and they work as intended.
Development can be tricky. There are a lot of all-in-one solutions like the Serverless framework; we use the Apex CLI tool for deploying and Terraform for infra. These tools offer a nice workflow for most developers.
Logging is annoying; it's all CloudWatch, but we use a Lambda to send all our CloudWatch logs to Sumo Logic. We use CloudWatch for metrics; however, we have a Grafana dashboard for actually looking at those metrics. For exceptions we use Sentry.
Resources have bitten us the most: suddenly not enough memory because of the payload from a download. I wish Lambda allowed for scaling up on a second attempt so that you could bump its resources; this is something to consider carefully.
Encryption of environment variables is still not a solved issue: if everyone has access to the AWS console, everyone can view your env vars. So if you want to store a DB password somewhere, it will have to be KMS, which is not a bad thing (it's usually pretty quick), but it does add overhead to the execution time.
Terrible deploy process, especially if your package is over 50 MB (then you need to get S3 involved). Debugging and local testing are a nightmare. CloudWatch Logs aren't that bad (you can easily search for terms).
We have been using Lambdas in production for about a year and a half now, to do 5 or so tasks, ranging from indexing items in Elasticsearch to small cron clean-up jobs.
One big gripe around Lambdas and integration with API Gateway is that they totally changed the way it works. It used to be really simple to hook up a Lambda to a public-facing URL so you could trigger it with a REST call. Now you have to do this extra dance of configuring API Gateway per HTTP resource, complicating the Lambda code side of things. Sure, with more customization comes more complexity, but the barrier to entry was significantly increased.
* Games are developed as command line tools which use JSON for input and output. They're pure so the game state is passed in as part of the request. An example is my implementation of Lost Cities
* Games are automatically bundled up with a NodeJS runner and deployed to Lambda using Travis CI
* I use API Gateway to point to the Lambda function, one endpoint per game, and I version the endpoints if the game data structures ever change.
* I have a central API server which I run on Elastic Beanstalk and RDS. Games are registered inside the database and whenever players make plays, Lambda functions are called to process the play.
I'm also planning to run bots as Lambda functions, similar to how games are implemented, but have yet to get it fully operational.
Apart from stumbling a lot setting it up, I'm really happy with how it's all working together. If I ever get more traction, it'll be interesting to see how it scales up.
I was initially attracted to it as a low-cost tool to run a database (RDS) powered service side project.
- Zappa is a great tool. They added async task support, which replaced the need for celery or rq. Setting up HTTPS with Let's Encrypt takes less than 15 minutes. They added Python 3 support quickly after it was announced. Setting up a test environment is pretty trivial. I set up a separate staging site, which helps to debug a bunch of the orchestration settings. I also built a small CLI to help set environment variables (Heroku-esque) via S3, which works well. Overall, the tooling feels solid. I can't imagine using raw Lambda without a tool like Zappa.
- While Lambda itself is not too expensive, AWS can sneak in some additional costs. For example, allowing Lambda to reach out to other services in the VPC (RDS) or to the Internet requires a bunch of route tables, subnets and a NAT gateway. For this side project, this currently costs way more than running and invoking Lambda does.
- Debugging can be a pain. Things like Sentry make it better for runtime issues, but orchestration issues are still very much trial and error.
- There can be overhead if your function goes "cold" (i.e. infrequent usage). Zappa lets you keep sites warm (additional cost), but a cold start adds a couple of seconds to the first-page load for that user. This applies more to low volume traffic sites.
Overall: it's definitely overkill for a side project like this, but I could see the economics of scale kicking in for multiple or high-volume apps.
- No straight way to prevent retries. (Retries can crazily increase your bill if something goes wrong)
- API gateway to Lambda can be better. (For one, Multipart form-data support for API gateway is a mess)
- (For NodeJs) I don't see why the node_modules folder should be uploaded. (Google cloud functions downloads the modules from the package.json)
Anyways, I'd recommend starting from learning the tools without using a framework first. You can find two coding sessions I published on Youtube.
- works as advertised, we haven't had any reliability issues with it
- responding to Cloudwatch Events including cron-like schedules and other resource lifecycle hooks in your AWS account (and also DynamoDB/Kinesis streams, though I haven't used these) is awesome.
- 5 minute timeout. There have been a couple of times when I thought this would be fine, but then I hit it and it was a huge pain. If the task is interruptible you can have the Lambda function re-trigger itself, which I've done and which actually works pretty well once you set up the right IAM policy, but it's extra complexity you really don't want to have to worry about in every script.
- The logging permissions are annoying; it's easy for it to silently fail logging to CloudWatch Logs if you haven't set up the IAM permissions right. I like that it follows the usual IAM framework, but AWS should really expose these errors somewhere.
- haven't found a good development/release flow for it. There's no built-in way to re-use helper scripts or anything. There are a bunch of serverless app frameworks, but they don't feel like they quite fit because I don't have an "app" in Lambda I just have a bunch of miscellaneous triggers and glue tasks that mostly don't have any relation to each other. It's very possible I should be using one of them anyway and it would change how I feel about this point.
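The self-re-triggering pattern from the timeout point above can be sketched roughly like this. The `reinvoke` callback stands in for an async `boto3` `lambda.invoke(InvocationType='Event')` call and is injected so the control flow is visible and testable; the names and safety margin are assumptions, and `get_remaining_time_in_millis` is the real Lambda context method:

```python
def process_batch(items, context, reinvoke, safety_ms=30_000):
    """Process as many items as fit in the remaining execution time,
    then hand the leftovers to a fresh asynchronous invocation."""
    done = []
    while items:
        if context.get_remaining_time_in_millis() < safety_ms:
            reinvoke(items)        # re-trigger self with remaining work
            break
        done.append(items.pop(0))  # placeholder for real per-item work
    return done
```

The IAM policy has to allow the function to invoke itself, which is the extra complexity mentioned above.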
We use Terraform for most AWS resources, but it's particularly bad for Lambda because there's a compile step of creating a zip archive that terraform doesn't have a great way to do in-band.
Overall Lambda is great as a super-simple shim if you only need to do one simple, predictable thing in response to an event. For example, the kind of things that AWS really could add as a small feature but hasn't, like sending an SNS notification to a Slack channel, or tagging an EC2 instance with certain parameters when it launches into an autoscaling group.
For many kinds of background processing tasks in your app, or moderately complex glue scripts, it will be the wrong tool for the job.
Here are my recommendations:
1) Use Serverless Framework to manage Functions, API-Gateway config, and other AWS Resources
2) CloudWatch Logs are terrible. Auto-stream CloudWatch Logs to Elasticsearch Service and use Kibana for log management
3) If using Java or other JVM languages, cold starts can be an issue. Implement a health check that is triggered on schedule to keep functions used in real-time APIs warm
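Recommendation 3 can be as simple as a scheduled CloudWatch Event carrying a marker field that the function short-circuits on. The `warmup` field below is a convention I'm assuming for illustration, not an AWS one:

```python
def handler(event, context=None):
    """Real-time API handler with a cheap warm-up path: scheduled
    pings carry a marker field and return before doing any work."""
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}
    return do_real_work(event)

def do_real_work(event):
    # Stand-in for the actual API logic.
    return {"statusCode": 200, "body": "real response"}
```

Scheduling the ping every few minutes keeps at least one container warm; it does not help when concurrency spikes beyond that.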
Here's a sample build project I use: https://github.com/bytekast/serverless-demo
For more information, tips & tricks: https://www.rowellbelen.com/microservices-with-aws-lambda-an...
One thing to be careful of: if you're targeting input into DynamoDB table(s), it's really easy to flood your writes. Same goes for SQS writes. You might be better off with a data pipeline and slower progress. It really just depends on your use case and needs. You may also want to look at running tasks on ECS; depending on your needs that may go better.
For some jobs the 5-minute limit is a bottleneck; for others it's the 1.5 GB memory. It just depends on exactly what you're trying to do. If your jobs fit in Lambda's constraints, and the cold start time isn't too bad for your needs, go for it.
a few years back, the mantra was "hardware is cheap, developer time isn't". when did this prevailing wisdom change? Why would people spend hours/days/weeks wrestling with a system to save money which may take weeks, months or even years to see an ROI?
- You can't trigger Lambda off SQS. The best you can do is set up a scheduled lambda and check the queue when kicked off.
- Only one Lambda invocation can occur per Kinesis shard. This makes efficiency and performance of that lambda function very important.
- The triggering of Lambda off Kinesis can sometimes lag behind the actual kinesis pipeline. This is just something that happens, and the best you can do is contact Amazon.
- Python - if you use a package that is namespaced, you'll need to do some magic with the 'site' module to get that package imported.
- Short execution timeouts means you have to go to some ridiculous ends to process long running tasks. Step functions are a hack, not a feature IMO.
- It's already been said, but the API Gateway is shit. Worth repeating.
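For what it's worth, the scheduled-poll workaround from the first point looks roughly like this, using boto3's real `receive_message`/`delete_message` calls. The client is passed in so the sketch is testable without AWS; the helper name and batch cap are mine:

```python
def drain_queue(sqs, queue_url, handle, max_batches=10):
    """Workaround for the lack of an SQS trigger: a Lambda kicked off
    on a schedule that polls the queue until it is empty (or the
    batch cap is hit, to stay inside the execution timeout).
    `sqs` is a boto3 SQS client."""
    for _ in range(max_batches):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            handle(msg["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```

Deleting only after `handle` succeeds means a crash redelivers the message, so handlers should be idempotent.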
Long story short, my own personal preference is to simply set up a number of processes running in a group of containers (ECS tasks/services, as one example). You get more control and visibility, at the cost of managing your own VMs and the setup complexity associated with that.
A few pointers (from relatively short experience):
- The best use case for Lambda seems to be stream processing, where latency due to start-up times is not an issue
- For user/application-facing logic the major issue seems to be start-up-times (esp. JVM startup times when doing Java or your API gets called very rarely) and API Gateway configuration management using infrastructure as code tools (I'd be interested in good hints about this, especially concerning interface changes)
- The programming model is very simple and nice, but it seems to make most sense to split each API over multiple lambdas to keep them as small as possible, or use some serverless framework to make managing the whole app easier
- This goes without saying, but be sure to use CI and do not deploy local builds (native binary deps)
Then we implemented a RESTful API with API Gateway and Lambda. The Lambdas are straightforward to implement. API Gateway unfortunately does not have a great user experience. It feels very clunky to use, and some things are hard to find and understand. (Hint: request body passthrough and transformations.)
Some pitfalls we encountered:
With Java you need to consider the warmup time and memory needed for the JVM. Don't allocate less than 512MB.
Latency can be hard to predict. A cold start can take seconds, but if you call your Lambda often enough (often looks like minutes), things run smoothly.
Failure handling is not convenient. For example, if your Lambda is triggered from a Scheduled Event and fails for some reason, it gets triggered again and again, up to three times.
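Given those automatic retries, it pays to make handlers idempotent. A hedged sketch using an injected dedup store; in production this might be a DynamoDB conditional write with a TTL, and the in-memory set here is purely illustrative:

```python
def make_idempotent(handler, seen_ids):
    """Retried deliveries reuse the same event/request id, so tracking
    ids already seen lets the handler run once per logical event.
    `seen_ids` is any set-like store shared across retries."""
    def wrapper(event, context):
        request_id = getattr(context, "aws_request_id", None) or event.get("id")
        if request_id in seen_ids:
            return {"skipped": True}
        seen_ids.add(request_id)
        return handler(event, context)
    return wrapper
```

An in-memory set only survives within one warm container, which is exactly why a real deployment needs an external store.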
So at the moment we have around 30 Lambdas doing their job. Would say it is an 8/10 experience.
Doesn't like big app binaries/JARs, and Amazon's API client libs are bloated - Clojure + Amazonica easily goes over the limit if you don't manually exclude some of Amazon's API SDKs from the package.
On the plus side, you can test all the APIs from your dev box using the cli or boto3 before doing it from the lambda.
Would probably look into third party things like Serverless next time.
Since then I've been using Serverless for all my projects and it's the best thing I've tried thus far. It's not perfect, but now I'm able to abstract everything away as you configure pretty much everything from a .yml file.
With that said, there are still some rough spots with Lambda:
1) Working with env vars. Default is to store them in plain text in the Lambda config. Fine for basic stuff, but I didn't want that for DB creds. You can store them encrypted, but then you have to setup logic to decrypt in the function. Kind of a pain.
2) Working within a subnet to access private resources incurs an extra delay. There is already a cold start time for Lambda functions, but to access the subnet adds more time... Apparently AWS is aware and is exploring a fix.
3) Monitoring could be better. Cloudwatch is not the most user friendly tool for trying to find something specific.
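One way to soften point 1: decrypt once per container and cache the result, so the KMS round trip is paid only on cold start. A sketch with the decrypt call injectable for testing; the env var name is illustrative, while `kms.decrypt(CiphertextBlob=...)` is the real boto3 API:

```python
import base64
import os

_cache = {}

def get_secret(name, decrypt=None):
    """Decrypt a KMS-encrypted environment variable once per container.
    Warm invocations hit the cache instead of calling KMS again."""
    if name not in _cache:
        ciphertext = base64.b64decode(os.environ[name])
        if decrypt is None:
            import boto3  # real path: one KMS call per cold start
            decrypt = lambda blob: boto3.client("kms").decrypt(
                CiphertextBlob=blob)["Plaintext"].decode()
        _cache[name] = decrypt(ciphertext)
    return _cache[name]
```

The decrypt logic still has to live in the function, as noted above; caching just keeps the overhead off the hot path.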
With that said, as a whole Lambda is pretty awesome. We don't have to worry about setting up ec2 instances, load balancing, auto scaling, etc for a new api. We can just focus on the logic and we're able to roll out new stuff so much faster. Then our costs are pretty much nothing.
I think a lot of people try to use the "serverless" stuff for unsuitable workloads and get frustrated. We are running a kubernetes cluster for the main stuff but have been looking for areas suitable for lambda and try to move those.
I'm not allowed to give you any numbers; here's an old blogpost about Sketch Cloud: https://awkward.co/blog/building-sketch-cloud-without-server... (however, this isn't accurate anymore). For this use-case, concurrent executions for image uploads are a big deal (a regular Sketch document can easily consist of 100 images). But basically the complete API runs on Lambda.
Running other languages on Lambda can be easily done and can be pretty fast, because you simply use node to spawn a process (Serverless has lots of examples of that).
Let me know if you have any specific questions :-)
Hope this helps.
I do remember logging being a confusing mess when I was trying to get this started. I feel better about the trouble I had now that I see it wasn't just me. But for a side project that's very simple to use, Lambdas have been a blessing. I get this functionality without having to manage any servers or create my own API with something like Python+Flask. Having IAM and authentication built in for me made the pain from the initial set-up so worth it.
The worst part about it by far is CloudWatch, which is truly useless.
Check out https://github.com/motdotla/node-lambda for running it locally for testing btw - saved us hours!
1. Installing your own Linux modifications isn't trivial (we had to install the BPG encoder). They use a strange version of the Linux AMI.
2. Lambda can listen to events from S3 (creation, deletion, ...) but can't seem to listen to SQS events. WTF? It seems like Amazon could fix this really easily.
3. Deployment is wonky. To add a new Lambda zip file you need to delete the current one. This can take up to 40 seconds (during which you would have total downtime).
For logging, we pipe all of our logs out of CloudWatch to LogEntries with a custom Lambda, although looking at CloudWatch logs works fine most of the time.
- Runs fast, unless your function was frozen due to insufficient usage or the like
- Easy to deploy and/or "misuse"
- Debugging doesn't really work
All in all, probably the least painful thing I've used on AWS. But that doesn't necessarily mean much.
Building reactive systems with AWS Lambda: https://vimeo.com/189519556
We also use it to perform scheduled tasks (e.g. every hour) which is good as it means you don't have to have an EC2 instance just to run cron like jobs.
The main downside is Cloudwatch Logs, if you have a Lambda that runs very frequently (i.e. 100,000+ invocations a day) the logs become painful to search through, you have to end up exporting them to S3 or ElasticSearch.
It fails once in a while and the experience is bad, but that's mostly due to our tooling around failure states instead of the platform itself.
Need to say that you should use Gordon (https://github.com/jorgebastida/gordon) to manage it; Gordon makes the process easier.
API Gateway is a little rougher, but slowly getting there.
- For serverless APIs querying the S3 data that results from the above workload
Difficulties faced with Lambda(till now):
1. No way to do CD for Lambda functions. [Not yet using SAM]
2. Lambda launches in its own VPC. Is there a way to make AWS launch my lambda in my own VPC? [Not sure.]
Would be really great to have this configurable along with CPU/memory.
Additionally, being able to mount an EFS volume would be very useful!
The only negatives are:
- cold start is slow, especially from within a VPC
- debugging/logging can be a pain
- giving a function more memory (~1GB) always seems to be better (I'm guessing because of the extra CPU)
- The CPU power available seems to be really weak. Simple loops running in NodeJS run way, way slower on Lambda than on a 1.1 GHz MacBook, by a significant margin. This is despite scaling the memory up to near 512 MB.
- Certain elements, such as DNS lookups, take a very long time.
- The CloudWatch logging is a bit frustrating. If you have a cron job, it will sometimes lump several time periods into a single log file; other times they're separate. If you run a lot of them it's hard to manage.
- It's impossible to terminate a running script.
- The 5 minute timeout is 'hard', if you process cron jobs or so, there isn't flexibility for say 6 minutes. It feels like 5 minutes is arbitrarily short. For comparison Google Cloud Functions let you work 9 minutes which is more flexible.
- The environment variable encryption/decryption is a bit clunky, they don't manage it for you, you have to actually decrypt it yourself.
- There is a 'cold' start where once in a while your Lambda functions will take a significant amount of time to start up, about 2 seconds or so, which ends up being passed to a user.
- Versions of the environment are updated very slowly. Only last month (May) did AWS add support for Node v6.10, after having a very buggy version of Node v4 (a lot of TLS bugs were in the implementation)
- There is a version of Node that can run on AWS Cloudfront as a CDN tool. I have been waiting quite literally 3 weeks for AWS to get back to me on enabling it for my account. They have kept up to date with me and passed it on to the relevant team in further contact and so forth. It just seems an overly long time to get access to something advertised as working.
- If you don't pass an error result in the callback, the function will run multiple times. It won't just display the error in the logs. But there is no clarity on how many times or when it will re-run.
- There aren't ways to run Lambda functions such that it's easy to manage parallel tasks, i.e. to see if two Lambda functions are doing the same thing when they are executed at the exact same time.
- You can create cron jobs using an AWS Cloudwatch rule, which is a bit of an odd implementation, CloudWatch can create timing triggers to run Lambda functions despite Cloudwatch being a logging tool. Overall there are many ways to trigger a lambda function, which is quite appealing.
The big issue is speed & latency. Basically it feels like Amazon is falling right into what they're incentivised to do - make it slower (since it's charged per 100ms).
PS: If anyone has a good model/provider for 'serverless SQL databases', kindly let me know. The RDS design is quite pricey, having constantly running DBs (at least in terms of how you pay for them).
- Use environment variables
- Use Step Functions to create state machines
- Deploy using cloudformation templates and serverless framework
You don't need to use the API gateway.
Just talk direct to Lambda.
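"Talking direct to Lambda" means calling boto3's real `invoke` API yourself instead of fronting the function with API Gateway. A minimal sketch; the client is passed in so it can be stubbed, and the helper name is mine:

```python
import json

def call_lambda(client, function_name, payload):
    """Invoke a function synchronously without API Gateway in front.
    `client` would be boto3.client('lambda') in production."""
    resp = client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",  # "Event" for fire-and-forget
        Payload=json.dumps(payload),
    )
    return json.loads(resp["Payload"].read())
```

The caller needs `lambda:InvokeFunction` IAM permission, which is the trade-off against a public HTTP endpoint.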
It's a Bluetooth device that allows controlling the mouse cursor with body movement (head or finger, etc.). It's cheaper. Coupled with free dwell-clicking software, it should work!
2. Eye tracker - there are a lot of options; visit reddit.com/r/eyetracking and reddit.com/r/ALS and ask them for advice. These devices let you control a PC with your eyes and are especially designed for people who have ALS. The ones that work really well cost money, but most insurance companies cover them in full. Avoid Tobii; they are not reliable and are more marketing than anything. MyGaze, LC Technologies, EyeTech Digital, SMI Vision: these are all companies you can trust. All should offer free trial periods and should have a rep who can come and visit your dad to do an evaluation. If they don't offer at minimum a two-week trial, they're not a trusted company. Secondly, you can contact your local city's AT clinic; they have donated equipment for situations like this.
I hope this helps!
Quadriplegic just means all four limbs are impaired. The degree of impairment can vary substantially. One of these men had use of his arms, but did not have full use of his hands. He drove himself to work, had a full time job, wife and kid. He broke his neck in a pool accident in his teens. He used a manual wheelchair. He was able to use a manual wheelchair because he had use of his arms. He chose it over an automatic wheelchair to get in regular exercise.
The other was substantially more impaired. He broke his neck in a riding accident later in life. He had been a brilliant surgeon. He used an automated wheelchair. I think he had partial use of one arm and maybe a couple of fingers, which allowed him to navigate a smartphone with that hand. He came in once a week for a few hours to review surgical reports for the company. When ordinary claims processors (like I was) could not figure out if the surgery was covered and their boss with more training couldn't either, we printed off the entire file and hand delivered the paper version to this man on Friday afternoon. I had one claim go to him and hand walked my papers to the meeting.
I also attended an educational talk given by the two of them. This is how I know how they each broke their neck and other details.
Since your father was a consultant, he may be able to return to doing consulting work at some point in the future. The specialized knowledge in his head does not stop being valuable just because of his physical limitations. I am mentioning this because new quadriplegics are often suicidal. They feel that life is simply over. It's not. He was a professor and consultant, like this former surgeon, his knowledge and expertise still has value. Even though the former surgeon could no longer work as a surgeon, his knowledge of surgery was valuable and he had a unique very part time job at a world class company.
Depending on the exact details of your father's limitations, he may also benefit from the use of ordinary things like smart phones with apps. There are also a lot of non-tech assistive devices, like chairs to help them shower and spoons that can be strapped to their hand so they can feed themselves if they have arm movement but limited hand control.
It occasionally has to be reset by hand if the voice recognition locks up, which is the only barrier. But I'm fairly certain it's the best option available for people in your father's situation.
First, if your dad can still move his head you can use Apple's assistive tech to "tab" through the items on the screen with a turn one way, and "clicking" on an item by turning his face the other.
Second, MS Windows' voice control is actually really decent. You can browse, search, send emails, etc. all with your voice. It takes some training (both for the user and the machine) but my dad has gotten pretty quick with his.
Lastly, there's a bunch of eye trackers out there now, and you can use them for a lot of things. I setup CameraMouse (http://www.cameramouse.org/) for when voice wasn't quite cutting it (or my dad got tired of talking.)
Unfortunately, there's no perfect solution, and all require time to adjust.
Source: https://www.twitch.tv/nohandsken quadriplegic streamer who plays Diablo/Path of Exile, Heroes of the Storm, World of Warcraft, etc. (I encourage Amazon Prime subscribers to give him ~$2.50 every 30 days via their free Twitch sub! https://help.twitch.tv/customer/portal/articles/2574674-how-... )
slightly related/helpful discussion: https://github.com/melling/ErgonomicNotes
I remember hearing about this project some time ago: https://github.com/OptiKey/OptiKey
It might be helpful as it's an open-source project and if extra features are needed you might be able to add them yourself if you are a programmer.
I mentally bookmarked it because I felt it would be a good "make the world a better place" type project to contribute to if I ever had some spare time.
Thanks for bringing this request here to allow the community the chance to contribute!
- SmartNav (if you're on a Mac you need to buy via a 3rd party, but it includes the software)
It's fairly expensive, and there are other variants that cost less or more; some gaming devices like TrackIR might work as well. It's also possible that health insurance would pay for these types of devices.
I personally use Smartnav about 50% of the time I am programming, along with Dragon/Voicecode due to RSI issues.
Smartnav + Dragon might be enough for using laptop/desktop, not so much for mobile devices. If he actually programs I would recommend voicecode.
All of these technologies have a massive learning curve.
You might want to checkout the voicecode forum and slack channel, I know there are some quadriplegic programmers in that community who would have better insight than I.
First, a voice setup with Alexa or similar can really help.
With regards to phone use, some of our users have an attachment to put the phone close to their head and use their nose to "click/select" (they can move their head).
Eye tracking technology is really impressive these days (can be as fast as using a mouse). I've recently demoed a system with a Tobii sensor (https://www.tobii.com/) that was hooked up to a laptop, very impressive when combined with appropriate software (it handles scrolling, keyboard shortcuts, etc in a custom interface). I'm not sure with regards to phone/tablet use how well they integrate.
Ping me on Linkedin if you'd like to talk more.
I'm truly sorry about your dad. That's a scary situation for him to be thrust into.
I have tried most of the commercial solutions available and I think the best headmouse for your dad would be the Zono mouse: http://www.quha.com/products-2/zono/. It is very easy to use and is as accurate as a normal table-top mouse.
I think they've created software that can bypass captchas and will work with you to develop software that can help your dad.
Tecla is great you should give it a try. Depending on his comfort and ability a head tracking mouse from Orin is pricey but works really well with a laptop/desktop setup. Dragon Naturally speaking is useful too.
Also he should make an appointment with a local assistive technology practitioner soon to get a run down of all the options, both low and high tech. You can find these ATP folks at most all rehab hospitals.
Sepsis now dominates the hospital ICU; it is what kills most AIDS patients too. Antibiotic resistance is driving costs. The ICU is now 40% of US hospital budgets, and this is bankrupting state and federal budgets. This is why Medicare, Medicaid, and Obamacare are bankrupting US government (federal and state) budgets. By 2013, health dominated state budgets.
State spending on health care now exceeds education spending. Look at NM's past budgets. http://www.usgovernmentspending.com/compare_state_spending_2...
Today 1/4 of US VA and Indian Health patients are diabetic. US Defense Dept. funding must now compete with Medicare. Today 40% of hospital costs are for growing ICUs and chronic disease. Half of US Medicare cost is chronic disease from diabetes.
NM ICUs are dominated by chronic disease. http://www.amazon.com/Where-Night-Is-Day-Politics/dp/0801451...
40% of US hospital budgets now pay ICU/chronic disease costs, and this cost is going up annually. http://money.cnn.com/video/technology/2013/07/24/fortune-tra...
Can MinION help pre-ICU patients better control diet and sepsis infection? http://www.bloomberg.com/news/articles/2015-06-03/deadly-inf... A complete bacterial genome with MinION: http://www.nature.com/nmeth/journal/v12/n8/abs/nmeth.3444.ht...
MinION can find septic bacteria fast. https://genomemedicine.biomedcentral.com/articles/10.1186/s1...
1. Time became my most valuable asset. Everything was filtered through the lens of "does this save me time?" and so I optimized everything: The gym (worked out at home), shopping (got delivered), dating (used fleshl...joking! :).
In the words of Joel Spolsky, "Every day that we spent not improving our products was a wasted day".
2. I worked harder than ever before. My job was tough but output ebbed and flowed with meetings, management, plus the usual office time wasters. The startup workday is more straightforward: wake up, coffee, write code, listen to users, coffee, learn how to add value to the market, coffee.
3. Every two months or so I look back and shake my head at how lame the product was, how little I knew, and how inefficient my workflow was. Which is to say, I continue to learn at an incredible clip yet realize I still don't know a thing. I expect this trend to continue - if it doesn't, I'm not growing.
So, yeah, overall it's been An Incredible Journey. My only regret is that I didn't start sooner.
It's actually a gift how easy it is to go from idea to product to business. To paraphrase Murakami, 'If you're young and talented (or can code), it's like you have wings'.
We're living in the best of times.
Anyway, I started contracting last year for this exact reason, at around a 300-400 day rate, and now I've saved enough to quit and follow my 'dream'; my last day is July 7th. I have enough savings to last me 2 years while sustaining my current social life. No frugality required.
When you're building a Nights and Weekends side project, you get used to stealing whatever free hours you can to work on the product. But you also necessarily build things so that they don't take up much of your time every day. If they did, it would interfere with your day job and that just wouldn't work.
So when you remove the day job, you find that suddenly you have this successful business that runs itself in the background and you can do pretty much whatever you want with your day.
Most people in this situation will immediately fill that time up with work on the product, and I did to some extent. But I also made sure to take a bunch of that time just to enjoy with my family. I eventually settled on 2-3 days a week where I was "at work", with the rest devoted to other pursuits. My wife and I are both rock climbers, which is an "other pursuit" that will happily expand to fill the time available. We're also parents, so ditto there.
I also make a point of downing tools for a while from time to time. Again, because I can.
I took the kids out of school and dragged them off backpacking around Southeast Asia for a few months the first year. We did a couple more medium-sized trips this year, and I took the entire fall and spring off because those are the best times for bouldering in the forest here. Again, work is happy to ramp up or down to accommodate because I never shifted it out of that ability to run on nights and weekends.
So now, I burst for a few weeks at a time on work stuff (with possibly a more relaxed definition of Full Time than most would use), then slow down and relax for a bit.
It's actually not so bad.
Since I've been working as a mobile developer, and also management consultant (my other career), it's always been extremely easy for me to find a new job whenever I needed, so there has been very little risk involved.
Still, it did require some savings, since our startup is very research intensive and will take several years before we see any revenue. We secured some basic funding now though, and things are looking good for the next stage too so I will only have needed 6 months or so of buffer.
In summary, my view is that if you're a reasonably skilled engineer or have some other attractive occupation, there is nothing to fear. The worst that can happen is really that your startup doesn't work, you'll go back to what you did before with a few months of missed income but with plenty of useful experience.
I don't think there are many situations or cultures where a failed technology startup attempt on your résumé would count against you in any way; in most places, quite the opposite.
The second time, I created this plugin: http://plugins.netbeans.org/plugin/61050/pleasure-play-frame... I tried to make a living off it, but I only sold 5 licenses at 25 dollars per year, and developing the plugin took me two and a half months of hard work.
Didn't have a problem getting a new job both times.
Having the luxury to focus on one thing, rather than juggling several, is much like having an office that is neat, tidy, and uncluttered. It feels good in the same way. At least by quitting a job and focusing on a startup, you have the option to focus 100% on it. Actually focusing 100% on one thing is a difficult skill in itself, even with the right circumstances; however, it's completely impossible (at least for me) with two fulltime jobs at once, especially jobs like teaching (which involve lots of public speaking at scheduled times) or running a website with paying customers (which demands, e.g., responding to DDOS attacks).
I dream of working for myself but I've never taken the plunge. My income from side projects is about 1/3 of the way to my minimum number to quit and go full time.
I do a lot of thinking about this, my number is the same as my financial independence / early retirement number.
One of the biggest things that holds me back is medical insurance for a family of 5. Having an employer offsets this cost a LOT.
I didn't take the plunge until quite late on, waiting until it was making enough money to comfortably cover my personal expenses. No regrets there - growth was slow in the early days and if I hadn't had the luxury of a monthly pay packet, I probably would have given up before I got the chance to properly validate the business.
Transitioning to full time brought more stress than I expected, but the experience is priceless. In the past few months alone I've learnt more than I did in 3 years of employment.
Realistically, what's the worst case scenario? I'm a reasonably skilled dev in a strong market so there's not much to lose. If it all goes wrong I'll get another job with a load of experience (and stories!) under my belt.
I reached a point after about 4 months where I realised the journey to make the business profitable would most likely be a five year slog, and while the opportunity was there it wasn't a cause I felt I could devote 5 years of my life to.
So I gave the software for free to the people who were helping with beta testing and went back to my job. The most positive thing was how it helped propel my career at my current employer: I got a better role, they seem to have more respect for me, and I operate more independently now.
So I suppose if you can build some sort of safety net before quitting that helps.
It was scary as hell (no revenue coming from the startup), fun as hell, challenging as hell. Had my savings all planned out to help support the adventure, but still had that daily stress of knowing every dollar I spent was not coming back anytime soon. That part wasn't fun. But I didn't have kids or a mortgage and knew this was my chance to do something of the sort.
10/10 would do again in a similar situation, though knowing what I know now, I might have launched a business instead of chased a cool idea.
I'm back working for a startup again now so I guess I'm just going back and forth.
I've worked for a few startups and none of them has had an exit yet but one of those I have shares in is doing relatively well.
Doing contracting work is a smarter decision in general. You can actually plan to make a sizeable amount of money and then watch it happen without taking any risks - It's all within your control. With startups, you might often feel that it's outside of your control, especially if you're not a co-founder.
I have a previous coworker who'd love to help me, but I don't want to babysit his work and I feel he's not valuable enough to the business. I would like another cofounder, but it doesn't bother me that I'm doing it all on my own, I have spent the last 10 years getting ready for this, so I'm more than ready. I am doing more than okay on my business alone, but I wish I had some expertise for a second opinion. I am really really thinking about going into an accelerator program or seeking angel investment, but I'm apprehensive about taking cash at this (or any stage). My biggest fear is actually having to get a real job again, I will do anything to prevent that from happening since that means my startup is dead.
As with everything there are pros and cons. The pros are obviously that you get to spend your time doing something you enjoy (hopefully), and can work whenever and wherever you feel like (this can also be a con!). The cons are that you will always be worrying about stuff like churn, whether servers will go down whilst you're away on holiday, how you're going to grow enough to support a family etc etc.
The long, slow SaaS ramp of death really is a thing, and there are no silver bullets in terms of growth/marketing - just many small things that all contribute. I also always used to think 'if only I could just get to $x MRR then everything would be so much better and I'd be much more comfortable and relaxed', but when you do eventually break through that barrier you realise you're just more worried about how you're going to achieve the next one, so it's kind of never-ending!
I also agree with other posts here that if you're already a decent developer in a good market, then what is the worst that can happen really? Try doing some fearsetting. I'm sure you could always find another job if your thing doesn't work out, but you do need to give these things time. I also failed a bunch of times with other startup ideas, one of which was also YC backed.
There is this assumption that one must build a minimum viable product that has to be released as quickly as possible, so much that it's become startup mantra. It's no surprise that a lot of these products seem to be technically shallow, everyone is reaching for low hanging fruit.
I feel rather alone trying to do something that I think hasn't been done before, or if it had, wasn't executed well. I don't think I could possibly commit to it without having strong motivation, which I struggled with while having a full-time job.
The biggest technical/social challenge I have is to make something that a non-technical user could easily get right away and make something with it. I think the automation of web dev is an inevitability, and frameworks were just a historical blip on this path. The same thing is happening to web design. http://hypereum.com
I still have the original client 3 years later and the company grosses about $3,500 per month and I net $1,250. It pretty much runs itself, requires maybe 2 hours of work every 2 - 3 months. I spent a little over a year trying to grow it from the initial customer with no luck.
Landed a job as a full stack engineer afterwards and I really like it. I am actively looking to start a new project but I will keep my main job while doing it. I had the benefit of a wife who makes a good salary to support me during that prior adventure (Still do :) )
Let's just say that my mistake was that I was too afraid to hurt my co-founder's feelings. If we parted ways when we should have, I might have actually gotten somewhere. (Then again, I might have gotten nowhere either!)
Previously I contracted as a full stack developer bringing in other developers on projects as and when the project timescales wouldn't have been achievable with just me. Running a software consultancy alone, dealing with all of the usual rigmarole of a business and performing proper client outreach was stressful, but financially and personally very rewarding (especially when you close a big deal completely on your own).
In order to get involved in my current startup, which at the outset was comprised of a designer, biz dev (CEO) and myself as CTO I had to cut off ties with my previous clients and dedicate all of my available time to the new startup. I had leveraged myself quite a bit running the previous consultancy as billings were growing year on year, so my VAT/Corporation tax accounts were generally paid out of job fees towards the end of the year rather than set aside throughout the year, leaving me in a negative cash flow position when stopping work for existing clients. Luckily there were some ongoing payments that didn't require development resource, so the small admin time required to invoice and chase up was all that was required, and enabled me to setup payment arrangements with HMRC to settle these liabilities over a period of time, out of this cashflow. Setting up these arrangements was very stressful, and I would strongly advise anyone coming into a startup to fully evaluate their financial situation before committing even if the opportunity seems huge.
Initial salaries in the new start-up were minimal (1000 p/m approx), and it took a solid three years, extremely long working days, almost unmanageable personal stress and around 0.5m of funding before we got up to an above-average salary, 1m ARR, a team of 15 and strong growth projected for the coming year.
Success is a subjective term, and occasionally I have to refocus to see the light at the end of the tunnel, but with enough grit, luck and determination, it's possible to tip the balance to a point where success is more likely than not.
So, if you are planning to leave a job and have a good product that is getting you even half of the money you need, leaving your job will only increase the chances of success. Hanging on to the job while working on a product is going to be much harder.
The first time I was two years out of school with $12k in the bank, had a partner with a ton of experience, and a decent idea. We crunched for six months, launched, failed, and then tried to pivot. I ran out of cash a few months before the iPhone launched and had I had a longer runway we could have ported our app to the iPhone and potentially seen success.
A year ago and nine years later than that attempt, I've started a small video game company with another friend (justintimegame.com). Despite my life situation being more complicated and expensive to maintain, the prior nine years' success combined with my wife's income basically lets me try and fail until I get sick of it instead of when the money would run out. Obviously I'm aiming for success, but the massively reduced stress from barely worrying about money lets me be much more open to experimentation while also being resilient to failure.
I don't regret starting and failing my first company however. It set me up for having a higher risk threshold and an interest in startups that ended up working out quite well for me.
For somebody like me, and probably a lot of HN readers, it's _actually_ a fairly low-risk proposition, because qualified, experienced software engineers are so sought after. Whatever you are doing, you will always be able to pick up a $1000-$1500 a day gig when you need to bootstrap your actual project.
My old boss has contacted me a few times to see if I want to come back- definitely do not want to.
You talk about "fear", and you talk about a "successful" startup. Here's the thing: You never know if a startup will be successful, and you just have to give it a go for the love of it, rather than any expectation of success. Don't be afraid- there are plenty of worse things in this world than a failed company.
Have learned a lot about bookkeeping.
Expecting to get it right is the mistake we all make at some point (even when we say out loud "this might not work out", we still somehow expect it to work). Expecting failure to lead to something positive is the long game I'd urge you to play; it's hard to remain in a good mental state at times while you're working hard and feeling underappreciated, but that is sadly just what it's like.
I guess the "quit your job" problem only exists if you have major responsibilities, like a family, or paying debt back. Otherwise, it makes no real sense to consider it, the opportunity is too big.
2. 6 month financial backup is usually not enough. I have heard many stories where people try going independent for 6 months, run out of money and start looking for a job. What happens is - entrepreneurship gets into you in that time and if one goes back to a job, I can bet they feel even more frustrated. You need 1.5 years of backup or 2-3 years of "frugal living backup". I struck positive cashflows in just about 5 months, but it wasn't good enough. I distinctly remember thinking "Maybe, I should have done this part time". Then I struck a mini-gold-mine at 8 months. Having a good backup will help you persist longer. I did not have a growth strategy that worked. But I focused on working and doing the right thing. Keep it rolling.
3. The biggest worry I had when starting was about providing "enough" for my family and any emergencies for next 1.5-3 years at any point in time. Unlike many stories, I promised myself not to wait until I go bankrupt or in a lot of debt - Nearing that is a huge red flag, where I would typically exit and take a regular job. However, taking a job is the last thing I want to do. That thing kept me money-oriented for a while and made me work on stuff that generated positive cashflow.
4. Would it have been possible to return to your old job? Maybe, but I would not want to. I waited too long to jump ship. In fact, my experience with multiple "good" jobs is what is keeping me away from them. Once you taste entrepreneurship, it's hard to go back.
5. I do not consider myself successful. Maybe semi-successful; some people see it as success. But I have come a long way from fearing failure. Success may or may not last long. I enjoy the process and the tremendous personal growth it results in. I ensure my financial backup now gives me 5-6 years minimum to start afresh if I have to. Do not undervalue the role of money - it definitely makes things easier.
6. This is my favorite quote about karma. I heard it many years back (and thought it was impractical). It's especially useful when I feel I did everything right but nothing works: "Karm karo, fal ki chinta mat karo" (Do your duty without worrying about the results).
P.S.: I don't know about others, but I have restricted myself to writing fewer HN comments because they take quite a bit of time/energy. This one is an act of impulse. How do other entrepreneurs feel about this?
1. Give your employer your 2 weeks/1 month notice (depending on locale). Taking this step immediately is critical because the urgency and shock of the change will force you into being fast and practical about all the subsequent steps.
2. Create a monthly budget for yourself which assumes no income that you are not 100% sure about. So if you have interest from investments or a freelance contract that's an absolute guarantee, you can include it. For most people the income side of this budget is going to be low or nothing. Your goal with this budget is to stretch your funds out for 6-12 months. The good news is that in 2017, the principle of geoarbitrage allows you to live on virtually any budget. If you live in the Bay Area your next step is going to be to move somewhere cheaper. On the cheapest end of the spectrum, I'll use Thailand as an example because I live here: you can get a basic apartment in the suburbs of Chiang Mai or Bangkok for $100-$500/mo, your initial arrival can be visa-free, and you'll live on delicious Thai food from a restaurant down the road for a few dollars a day. Network heavily with people in your intended destination before you even arrive, because it'll make everything 100 times easier.
3. Now create a business plan for your new entity. The business plan should include a description of the product or service which you're going to market, how you're going to market it, what you're going to charge (start high), and any and all costs of development and operation including your own time. It should include monthly profit/loss projections (you're not allowed to use these projections in your budget, they are goals, not guarantees). The most important thing about your business isn't what product or service you initially offer. Once you have assets and control you can try anything you want. Until then the goal of your business is to make enough income that your assets are growing, no matter what that entails.
If you're leaving the country as a part of this process I would advise forming an LLC and opening a bank account before you go, as these things can be difficult from overseas. You'll be very busy trying to make money and living your dream so you don't want to have to deal with paperwork.
Prepare yourself mentally to work very hard for at least the next 6 months and do whatever you need to do to make enough cash. You will become practical and decisive, and you'll learn many realities about business, such as cash flow is king, very quickly. I got my start being nickel-and-dimed by agencies in India over Elance. It sucked and it was hard and it was 100% worth it.
There are many objections to this strategy which typically stem from risk aversion, or a desire to not worry about money. I would submit that if one objects to the risk, this plan is a personal growth opportunity: it will teach them how to handle stress, plan for contingencies, and so on. If the objection is that they don't want to worry about money, I would point out that money is just a way for people to quantify your value to them, and since no man is an island, there are great personal and financial rewards to be reaped from confronting this objection and discovering what other people truly value about you.
Doing step 1 first and now is the key. If your path brings you through Bangkok let me know and we'll grab a beer! I've seen many people succeed at this and a few fail. Your odds are better than you think.
I believe controlling a massive number of nodes in the network via infection techniques like those WannaCry used would open the door for many actual and hypothetical attacks. See the Bitcoin wiki page titled "Weaknesses" for more details about attacks involving the control of network resources.
More realistically, a simpler attack would be to go for control of the wallets if you have that kind of access to the infected hosts. However, if an actor had an interest in devaluing Bitcoin, to buy after a crash and sell after recovery perhaps, or just destabilize users' trust and destroy it (states?) then there could be a lot of profit in it I believe. Bitcoin has many competitors and enemies, is this something we should worry about?
I wrote a paper, "Eclipse Attacks on Bitcoin's Peer-to-Peer Network" [1], about maliciously partitioning the Bitcoin network. Much of the paper focuses on how to partition the network, but Section 1.1, "Implications of eclipse attacks", should give a good sense for how Bitcoin's security properties depend on the network not being partitioned.
"Hijacking Bitcoin: Routing Attacks on Cryptocurrencies" [2] also discusses network partitions and Bitcoin. As with Eclipse Attacks, it focuses on both the how and the effects.
Interestingly, blockchains built on Algorand [3] would not fork under a network partition; they would just cease to create new blocks until the network is whole again.
[1]: "Eclipse Attacks on Bitcoin's Peer-to-Peer Network" https://www.usenix.org/node/190891
[2]: "Hijacking Bitcoin: Routing Attacks on Cryptocurrencies" https://arxiv.org/abs/1605.07524
[3]: "Algorand: Scaling Byzantine Agreements for Cryptocurrencies" https://people.csail.mit.edu/nickolai/papers/gilad-algorand-...
Someone who had 27 bitcoins before the split gets 27 of each type of coin after the split.
Every transaction will be incorporated into one or both of the copies. Some transactions will depend on other transactions, and therefore as time passes, even a small difference in the sets of transactions applied to each copy will snowball into the majority of transactions ending up in only one tree.
There is a vulnerability in the Bitcoin design here: transactions from one partition can be replayed on the other tree at any time, now or in the future. If someone sends you coins that only exist on one partition, but they later receive coins at the same address on the other partition, you can steal them by replaying the transaction.
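To make the replay concrete, here's a toy Python sketch (not real Bitcoin code; all names and structures are made up for illustration) of why a transaction broadcast on one side of a split, absent any replay protection, is equally valid on the other side:

```python
# Toy model of two post-split ledgers sharing a pre-split UTXO set.
# Nothing in a transaction commits to a particular side of the split,
# so the same signed transaction applies cleanly to both.

class Chain:
    def __init__(self, utxos):
        # UTXO set: maps (txid, output_index) -> (owner, amount)
        self.utxos = dict(utxos)

    def apply(self, tx):
        """Apply a transaction if all of its inputs are unspent here."""
        if not all(i in self.utxos for i in tx["inputs"]):
            return False
        for i in tx["inputs"]:
            del self.utxos[i]
        for n, (owner, amount) in enumerate(tx["outputs"]):
            self.utxos[(tx["txid"], n)] = (owner, amount)
        return True

# Both partitions inherit the same pre-split state.
pre_split = {("coinbase0", 0): ("alice", 27)}
chain_a = Chain(pre_split)
chain_b = Chain(pre_split)

# Alice pays Bob, broadcasting only on chain A...
tx = {"txid": "tx1", "inputs": [("coinbase0", 0)],
      "outputs": [("bob", 27)]}
assert chain_a.apply(tx)

# ...but anyone who saw those bytes can replay them on chain B,
# where the referenced output is also still unspent.
assert chain_b.apply(tx)
```

The fix real forks use is to add something chain-specific to the signature (as Bitcoin Cash did with its signature-hash change), so a transaction is only valid on one side.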
The answers there are mostly right. If left to its own devices, the fork will be resolved when the country gains access to the network again.
The way it would typically be resolved is that the chain that has done the most work (in a Proof of Work coin) will "win"... in practice this means the one with the longest chain and most transactions.
When this happens, the transactions in the blocks that roll back are likely to be added back to the mempool (the in-memory list of unconfirmed transactions), in which case they will probably still make it into a block. So most legitimate transactions might not be affected at all.
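A rough sketch of that reorg behaviour, under the simplifying assumption that a block is just a dict with a work value and a list of transaction ids (this is an illustration, not real Bitcoin code):

```python
# Toy reorg: keep the chain with the most cumulative work, and recycle
# transactions from our abandoned blocks into the mempool, minus any
# that the winning chain already confirmed.

def total_work(chain):
    return sum(block["work"] for block in chain)

def resolve_fork(local_chain, other_chain, mempool):
    if total_work(other_chain) <= total_work(local_chain):
        return local_chain, mempool          # our chain wins; no change
    # Find where the two chains diverge.
    common = 0
    while (common < min(len(local_chain), len(other_chain))
           and local_chain[common] == other_chain[common]):
        common += 1
    # Transactions from our orphaned blocks go back to the mempool...
    for block in local_chain[common:]:
        mempool.extend(block["txs"])
    # ...unless the winning chain already includes them.
    confirmed = {tx for block in other_chain for tx in block["txs"]}
    mempool = [tx for tx in mempool if tx not in confirmed]
    return other_chain, mempool

# Our 2-block chain loses to a 3-block chain from the other partition;
# the orphaned transaction "t1" returns to the mempool.
ours = [{"work": 1, "txs": ["t0"]}, {"work": 1, "txs": ["t1"]}]
theirs = [{"work": 1, "txs": ["t0"]},
          {"work": 1, "txs": ["t2"]},
          {"work": 1, "txs": ["t3"]}]
chain, mempool = resolve_fork(ours, theirs, [])
assert chain is theirs and mempool == ["t1"]
```

Real nodes compare expected hashes-to-produce per block rather than a stored "work" field, but the selection rule is the same shape.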
However, there is a problem here. Adding hundreds of thousands of transactions to the mempool on many coins will cause huge problems.
Another problem is if the same output is spent on both forks; this is called a double spend. In these coins, each transaction has one or more inputs and one or more outputs. Outputs can then be used as inputs to other transactions, but each output can only be used as an input once.
If that happens, the transaction that was on the fork that lost will itself be lost since the network will reject it for trying to spend an already spent output.
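A minimal sketch of that once-only rule, using a toy UTXO set (illustrative names, not actual Bitcoin data structures):

```python
# Toy UTXO validation: each output may be spent exactly once, so when
# the forks merge, whichever double-spend confirmed on the winning
# chain invalidates the conflicting copy from the losing fork.

def validate(tx, utxos):
    """A transaction is valid only if every input is currently unspent."""
    return all(i in utxos for i in tx["inputs"])

def apply_tx(tx, utxos):
    for i in tx["inputs"]:
        del utxos[i]                       # consume the spent outputs
    for n, out in enumerate(tx["outputs"]):
        utxos[(tx["txid"], n)] = out       # create the new outputs

utxos = {("coinbase", 0): ("alice", 1)}

# On the winning fork, Alice paid Bob with this output...
pay_bob = {"txid": "a", "inputs": [("coinbase", 0)],
           "outputs": [("bob", 1)]}
assert validate(pay_bob, utxos)
apply_tx(pay_bob, utxos)

# ...so the conflicting transaction from the losing fork, spending the
# same output to Carol, is rejected by the merged network.
pay_carol = {"txid": "b", "inputs": [("coinbase", 0)],
             "outputs": [("carol", 1)]}
assert not validate(pay_carol, utxos)
```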
Furthermore, if anyone travels from that country and connects to a network outside of it, they will eventually roll back and join the fork on that side of the partition, as that partition will inevitably end up with more "work done" than the one in the partition they left.
Now, if the country never gains internet access again, you effectively have two different coins. But you risk chaos as described above. One possible solution in that scenario is to "hard fork" and have everyone on one side of the partition install a new blockchain client. Then it's official: they are two separate coins.
(This is phrased in a fashion which Bitcoiners will not appreciate but it is not incorrect. For precedent, see the hardfork around the 0.8 release.)
Assumption 1: a government is able to shut down its entire internet, and block off all electronic communications.
This assumption is fine. There are multiple historical examples of governments doing this.
When a government does this, though, there is no network split. A network split is when you have 2 networks that are cut off from one another. The government "shutting off the internet" does not create 2 networks; it makes the population of the country have zero access to ANY network. Which means no split.
Which leads us to:
Assumption 2: A government is able to cut off access to the OUTSIDE internet, while also maintaining an INTERNAL network that can talk to each other, but not talk to the outside world.
This is basically impossible. There are no examples of governments being able to do this in any significant capacity.
Sure, there is some attempted internet censorship in countries like China, but the great firewall is extremely leaky. And even if it were 99% effective, 99% effective isn't good enough.
In order to partition the bitcoin network, you do not need to make it impossible for 99% of the population to get access to the outside world. You need to stop 100%, with no margin for error. This is because as soon as a SINGLE node is able to get access to the outside world, it can rebroadcast the information to all internal nodes.
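That "one leaked node is enough" property is just flood-fill over the peer graph, which a toy sketch makes obvious (node names here are made up):

```python
# Gossip propagation as flood-fill: whatever one node learns, every
# node reachable from it eventually learns. A censorship attempt that
# leaves even a single bridging link intact fails completely.

def propagate(peers, start):
    """Return the set of nodes a message starting at `start` reaches."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbour in peers.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return seen

# Internal nodes a, b, c; outside node w. The censor blocks everyone
# except b, which keeps one link to the outside world.
peers = {"w": ["b"], "b": ["w", "a", "c"], "a": ["b"], "c": ["b"]}
assert propagate(peers, "w") == {"w", "b", "a", "c"}  # everyone hears
```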
The block chain is essentially the same as a physical ledger that everyone (collectively) uses to confirm and record all transactions.
If everyone suddenly split (partitioned) into two separate groups with two separate ledgers, each group would continue to use its own ledger.
I'm not sure if bitcoin makes any arrangements for 'merging' ledgers. My understanding is that among divergent chains the longest chain always 'wins' and any others are abandoned.
So, once the two partitions are re-combined, when individuals reach out to 'everyone' and say "give me the latest version of the ledger" they would find the 2 competing ledgers and should choose to trust the one that is longer.
DESCRIPTION: My first distro was Debian. Then, for a while, I used Arch. But it kept irritating me with its total disregard for backwards-compatibility (symlinking /usr/bin/python to python3), coarse-grained packages (want to install QEMU PPC without pulling in every other architecture as well? too bad!), lack of debug packages (good luck rebuilding WebKit just to get stack traces after a SIGSEGV), and package versioning ignoring ABI incompatibilities (I once managed to disable the package manager by upgrading it without also upgrading its dependencies... and later cut off WiFi in a similar manner). So, when I finally trashed my root partition a few weeks ago, I decided to use the opportunity to return to Debian.
One thing I miss from Arch, though, is having an easy way to create a package. It's simply a matter of reading one manpage, writing a shellscript with package metadata in variables and two-to-four functions (to patch up the unpacked source, check the version, build it, and finally create a tarball), and then running `makepkg`. And it will just download the source code, check signatures, patch it, and build it in one step; it even supports downloading and building packages straight from the development repository. I took advantage of it to create locally-patched versions of some software I use, while keeping it up to date and still under the package manager's control.
Contrast that with creating a .deb, where doing the equivalent seems to require invoking several different utilities (uscan, dch, debuild, and more) and keeping track of separate files like debian/control, debian/changelog, debian/rules and whatever else. All the tooling around building packages seems oriented towards distro maintainers rather than users. I'd love something that would relieve me of at least some of the burden of creating a local package from scratch.
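For readers who haven't seen the Arch side of this comparison, a complete PKGBUILD can be as small as the following sketch (the package name, URL, and patch file are illustrative; checksums are elided with SKIP):

```shell
# Hypothetical PKGBUILD for a locally patched package; all values illustrative.
# makepkg sources this file, then runs prepare/build/package in order.
pkgname=hello-local
pkgver=2.10
pkgrel=1
arch=('x86_64')
source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz"
        "local-fix.patch")
sha256sums=('SKIP' 'SKIP')

prepare() {
  cd "hello-$pkgver"
  patch -p1 < "$srcdir/local-fix.patch"   # apply the local patch
}

build() {
  cd "hello-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install          # makepkg tars up $pkgdir afterwards
}
```

Running `makepkg -si` against this one file downloads, verifies, patches, builds, and installs the result under the package manager's control, which is the workflow being missed here.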
DISTRIBUTION: unstable, I guess
- DESCRIPTION: TL;DR: Debian's web pages are hard to navigate and use, and it's very hard to see what's happening.
I contribute to FOSS projects whenever I have time and have been wanting to contribute to Debian, but the difficulty is offputting. I'm used to searching for the program name and arriving at a portal page from which I can easily browse the source, see the current problems and instantly start interacting with the community. Unfortunately, contributing to Debian seems to require in-depth knowledge about many systems and arcane email commands. As a would-be contributor this completely alienates me.
One reason is that Debian has many independent services: lintian, mailing lists, manpages (which, by the way, are fantastic and give me hope), the wiki, CI, alioth, the package listing, BTS, etc. To contribute, you need to learn most of them. For example, searching for a package name gives me a page at packages.debian.org, but it's very hard to navigate or even discover the other services from there. I can't easily see whether there are any lintian issues, critical bugs or current discussions. Additionally, I find most of the systems very hard to use (I still can't figure out the mailing list archives). Ideally, these services would be more tightly integrated.
Another big reason Debian is very hard to contribute to is that the main discussion takes place on mailing lists. I understand that many people enjoy working with them, but for light usage they are a big pain: submitting and reading history happen in completely different programs, there seems to be no real threading, volume is often high, and wading through large amounts of email is a chore. A solution here would be an improved mailing list archive with replying integrated directly into the site.
- DISTRIBUTION: unstable
- ROLE/AFFILIATION: Student
DESCRIPTION: Any time you do a web search for anything regarding Debian, the search results include a huge amount of official but outdated information. Normally for Linux-related questions I refer to the amazing Arch wiki, but there are topics that are Debian-specific, and then sifting through all the detritus is a huge waste of time. There's a wiki, a kernel handbook, a manual, random xyz.debian.org pages, mailing lists, user forums, the Debian Administrator's Handbook...
Granted, it's a huge effort to clean all of that up, but perhaps there's a way to incorporate user feedback, so that pages can be marked as "outdated" by users, or updated by users (wait, there's a log-in page: does this mean I can edit wiki pages? Did not know that...), or otherwise made more systematic.
In particular, it would be great to have more complete information on the installation process: which images to use (RC, ..., or weekly image?), how to put them on a USB stick (why does my 32GB stick now say it has 128GB? you mean I can just copy the files to a FAT32-formatted drive?), what the options mean (for the hostname, is any name fine, or is an FQDN necessary?), etc. For every single clarification, there will be a hundred, a thousand, ten thousand people who are helped; that seems like a worthwhile investment. Everyone is a beginner at the beginning, regardless of knowledge outside this specific domain, so why not make it easier?
All that said, I have been using Stretch/testing for a few years, love it, love the Free/Libre Software ethos, love what you guys do; keep it up, thank you!
There are users who'd like to use a non-corporate community distro but who don't need or want software to be as old as software in Debian stable. The standard answer is "use testing" (e.g. http://ral-arturo.org/2017/05/11/debian-myths.html), but 1) security support for testing is documented to be slower than for stable and unstable (https://www.debian.org/doc/manuals/securing-debian-howto/ch1...) and 2) the name is suggestive of it being for testing only.
Please 1) provide timely security support for testing and 2) rename testing to something with a positive connotation that doesn't suggest it's for testing only. I suggest "fresh", following the LibreOffice channel naming.
ROLE: Upstream browser developer. (Not speaking on behalf of affiliation.)
Python 3 as default
Just to quote from the packaging manual:
> Debian currently supports two Python stacks, one for Python 3 and one for Python 2. The long term goal for Debian is to reduce this to one stack, dropping the Python 2 stack at some time.
The first step for that would of course be Python 3 as the default Python version, and I'd like to see that in buster, as Python 3 nowadays offers far more features than Python 2 and should be the choice for new Python projects.
DESCRIPTION: Right now, Debian's default install includes rsyslog, and every message gets logged twice. Once in rsyslog on disk, and once in journald in memory. Let's turn on the persistent journal by default, and demote rsyslog to optional. (People who want syslog-based logging can still trivially install it, such as people in an environment that wants network-based syslogging. But that's not the common case.) This will make it easier to get urgent messages displayed in desktop environments as well.
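The switch itself is small; a sketch of the relevant config (Storage= is the standard journald option; the size cap is illustrative):

```ini
# /etc/systemd/journald.conf (or a drop-in under /etc/systemd/journald.conf.d/)
[Journal]
Storage=persistent
# optional cap so the journal doesn't eat the disk (value illustrative):
SystemMaxUse=500M
```

Alternatively, leaving the default Storage=auto and simply creating /var/log/journal achieves the same effect, since journald persists whenever that directory exists.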
DESCRIPTION: On distros like Arch, and to a lesser extent Void and even Gentoo, writing package definition files (PKGBUILDs, ebuilds, templates) is relatively straightforward; in contrast, I don't even know where to start with finding, editing and building Debian packages. I think they're built from source packages, but beyond that I have no clue. Better visibility of the documentation could help here, if not more radical changes toward the Arch/Gentoo workflow.
DESCRIPTION: There have been numerous detailed analyses posted to debian-devel that go through every package in standard and important and list out which ones shouldn't be. However, actual changes have only ever been made here on a point-by-point basis. (I've managed to get a dozen or so packages downgraded to "optional" and out of the default install by filing bugs and convincing the maintainer.) I'd really like to see a systematic review that results in a large number of packages moved to "optional".
This would include downgrading all the libraries that are only there because things depending on them are (no longer something enforced by policy). And among other things, this may also require developing support in the default desktop environment for displaying notifications for urgent log messages, the way the console does for kernel messages. (And the console should do so for urgent non-kernel messages, too.)
DISTRIBUTION: Start with unstable early in the development cycle, so that people can test it out with a d-i install or debootstrap install of unstable.
DESCRIPTION: The license conflict between ZFS (CDDL) and the Linux kernel (GPL) means ZFS needs to be in contrib. Unlike a lot of other packages in contrib, ZFS doesn't rely on any non-free software; it just can't be in Debian main because of the conflict of licenses.
However, it would be nice if there were a more official path to ZFS-on-root for Debian. The current instructions, in the ZFS on Linux wiki, require a fairly large number of steps.
The ZFS On Linux wiki also lists a new initramfs file that has to be included so ZFS is supported. It seems odd that Debian couldn't include that as part of initramfs. I realize Debian doesn't want to necessarily promote non-free software, but this is free software that just conflicts with the GPL. It doesn't seem like it should be a second class citizen where you have to manually include files that should already be part of the package.
By the nature of the license conflict, it will be a second class citizen in that it can't be part of the normal installation package and you'll have to compile on the fly. However, it would be nice if there was a mode in the Live CD that could handle ZFS installation rather than doing it all manually.
DISTRIBUTION: currently mixture of testing/unstable but I'd like to use day(s) old sid (see other post).
- DESCRIPTION: If I installed e.g. postgresql, I would prefer it not start automatically by default. I would rather see a message: "If you want x to start on boot, run 'update-rc.d x enable'".
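For the "don't start on install" part, Debian already has a hook: if /usr/sbin/policy-rc.d exists, invoke-rc.d consults it before maintainer scripts start any service. A minimal sketch (written to the current directory here just for demonstration; the real location is /usr/sbin/policy-rc.d):

```shell
# Sketch: deny all automatic service starts triggered by package installation.
# invoke-rc.d treats exit status 101 as "action forbidden by policy".
cat > ./policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x ./policy-rc.d

# Demonstrate the exit status invoke-rc.d would see:
./policy-rc.d postgresql start || echo "status=$?"   # prints "status=101"
```

With that script in place, `apt install postgresql` unpacks and configures the package but leaves the service stopped; you then enable it explicitly when ready.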
- DISTRIBUTION: (Optional) [stable]
- ROLE/AFFILIATION: (software dev, mostly web)
- DESCRIPTION: This is a feature of the guix package manager. From their website:
"Each invocation is actually a transaction: either the specified operation succeeds, or nothing happens. Thus, if the guix package process is terminated during the transaction, or if a power outage occurs during the transaction, then the users profile remains in its previous state, and remains usable."
They also do transactional rollbacks, but I'm not sure how realistic that is for the apt package system.
DESCRIPTION: Long-time Debian user here and free software supporter. One aspect where I don't have any practical choice for free software is my non-free iwlwifi firmware.
It's a huge PITA to install Debian like that when you don't have the fallback of a wired network. You provide "non-free" firmware packages, but these don't contain the actual firmware! Rather, they're dummy *.deb packages that expect to download the firmware over the network from within the installer, which is of course a chicken-and-egg problem for WiFi firmware.
I end up having to "apt install" the relevant package on another Debian system, manually copy the firmware out of /lib onto a USB drive, then copy it over in the installer.
I understand that the Debian project doesn't want to distribute non-free firmware by default, but it would be great to be able to run a supported official shellscript to create an ISO image that's like the Stretch installer but with selected non-free firmware available on the image.
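That manual dance can at least be scripted. A sketch, with the package filename and mount point illustrative, and guarded so it only acts when the pieces are present; the installer looks for loose firmware files in a top-level firmware/ directory on removable media:

```shell
# Sketch: pull firmware blobs out of a firmware .deb without installing it,
# then drop them where the Debian installer scans for loadable firmware.
# Assumes the .deb was fetched elsewhere (e.g. with `apt-get download`).
DEB=firmware-iwlwifi.deb        # illustrative filename
USB=/mnt/usb                    # illustrative mount point of the install stick

if command -v dpkg-deb >/dev/null 2>&1 && [ -f "$DEB" ]; then
    dpkg-deb -x "$DEB" ./fw                  # unpack into ./fw, no install needed
    mkdir -p "$USB/firmware"
    cp ./fw/lib/firmware/* "$USB/firmware/"  # d-i scans firmware/ on removable media
else
    echo "sketch only: need dpkg-deb and $DEB present to run this"
fi
```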
DISTRIBUTION: Stable on my server, testing on my laptop.
DESCRIPTION: If you are using Debian, especially stable, you have to put up with outdated packages. This is especially a problem with browsers, although you do include security updates and track Firefox ESR, if I understand correctly. But things like WebKitGTK do not receive updates, and after a while they fall behind both feature-wise and security-wise.
I think keeping up-to-date versions and having a stable distribution is not per se a conflict. Stable means to me no breaking changes, no need for reconfiguration when I update. It shouldn't mean frozen in time.
It would be great if certain packages received frequent updates even in stable:
- packages that are not dependencies, have a good track record of backwards compatibility, and are unlikely to break
- packages that have to be updated because of security issues (which I think is already addressed now)
- or because of a fast moving ecosystem - even if it was safe, it is frustrating to use a very outdated browser component. I think many networked packages could fit in this category, e.g. Bittorrent or Tor clients, if there are protocol changes.
I think the situation has improved a lot (https://blogs.gnome.org/mcatanzaro/2017/06/15/debian-stretch...), and it would be great to have a stable basis in future and still have up-to-date applications on top as far as possible.
DISTRIBUTION: stable (but also others)
- DESCRIPTION: Creating a custom remote/local/CD/DVD repo or a partial mirror is simply a nightmare, mainly because package management internals are poorly documented. Many tools have been developed to solve just this problem, but most of them aren't actively maintained. Aptly seems like the best right now, but it is far too complicated and inflexible.
DESCRIPTION: AppArmor improves security by limiting the capabilities of programs. Ubuntu did this years ago. I'd like to see profiles for web browsers enabled by default.
I think AppArmor is the right choice of default Mandatory Access Control for Debian because Ubuntu and security-focused Debian derivatives like Tails and SubgraphOS have already committed to it.
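For readers who haven't seen one, an AppArmor profile is a plain-text allowlist over files and capabilities. A toy sketch (the binary path and rules are purely illustrative; a real browser profile is far larger):

```
# Illustrative AppArmor profile fragment, not a real browser profile.
/usr/lib/firefox/firefox {
  #include <abstractions/base>

  network inet stream,            # allow outbound TCP
  /etc/fonts/** r,                # read font configuration
  owner /home/*/Downloads/** rw,  # write only to the user's own Downloads
  deny /home/*/.ssh/** rwx,       # never touch SSH keys
}
```

Profiles live under /etc/apparmor.d/ and can be run in complain mode first, so enabling them by default doesn't have to break anyone's workflow immediately.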
DESCRIPTION: A consensus on the next generation of package management. Please. We have had decades of fragmentation (not to mention duplicated innovation) around the RPM vs DEB ecosystems, which is part of why it is still hard for beginners to want to use Linux: try explaining rpm vs deb vs whatever else to anyone who comes from a Mac. That's why they would pay for the Mac rather than use Linux ("it's too hard to install software").
It's not just my opinion: PackageKit (https://www.freedesktop.org/software/PackageKit/pk-intro.htm...) was invented for this reason, so you could have a GNOME software manager that works the same on every flavor of Linux. It's time to build this the right way.
You have an opportunity now, but again the camps are getting fragmented. We now have snap (Ubuntu/deb) vs Flatpak (Red Hat) all over again, and pretty strongly divided camps are beginning to form around them. It seems the new rhetoric is snap for servers and Flatpak for desktops... which absolutely doesn't make sense.
Debian is the place to make this stand: systemd was adopted from Fedora despite Ubuntu making a strong push for something else, and Debian's choice made Ubuntu adopt systemd. I don't think anyone has anything but respect for that process. Debian 10 must take a stand on this.
Stretch made OpenSSL 1.1 the default OpenSSL package. Unfortunately, OpenSSL 1.0 was kept around, since so many things depended on it.
There should now be enough time that a firm stance can be taken toward not allowing OpenSSL 1.0 in Debian Buster.
Once TLS 1.3 is finalized, OpenSSL 1.2 will be released with TLS 1.3 support. Not supporting TLS 1.3 in buster would (in my opinion) diminish Debian in other people's eyes. That means supporting OpenSSL 1.2, and having three OpenSSL packages (1.0, 1.1, and 1.2) is too much for one distribution.
DESCRIPTION: Many laptops (e.g. Macbook Pro) come with retina screens, but most of us use 'regular' monitors. Even after setting org.gnome.desktop.interface scaling-factor and playing with xrandr, it can be difficult or impossible to get a single external non-retina display set up in the right position and without one screen containing tiny text (or huge text).
Being able to make it work at all, and persist after a reboot, would be great. Having per-monitor scaling in the Display settings panel (or in 'Arrange Combined Displays') would be amazing.
DISTRIBUTION: I've experienced this with jessie. I haven't tried with stretch.
There are users who simultaneously want to get their infrastructural packages like compilers from their distro and want to build fresh upstream application releases from source.
This leads to pressure for Linux apps and libraries to be buildable using whatever compiler version(s) that shipped in Debian stable, which amounts to Debian stable inflicting a negative externality on the ecosystem by holding apps and libraries back in terms of what language features they feel they can use.
To avoid this negative externality, please provide the latest release (latest at any point in time, not just at the time of the Debian stable release) of gcc, clang, rustc+cargo, etc. as rolling packages in Debian stable, alongside the frozen version used for building Debian-shipped packages, so that Linux apps and libraries aren't pressured to refrain from adopting new language features as upstream compilers add support.
(Arguably, the users in question should either get their apps from Debian stable or get their compilers from outside Debian stable, too, but the above still seems a relevant concern in practice.)
100% reproducible packages
While having over 90% of packages reproducible already is awesome, 100% would be even better. The stretch release announcement describes best why:
> Thanks to the Reproducible Builds project, over 90% of the source packages included in Debian 9 will build bit-for-bit identical binary packages. This is an important verification feature which protects users from malicious attempts to tamper with compilers and build networks.
DESCRIPTION: There are a ton of packages in Debian. I sometimes browse through all of the packages looking for some gem that I didn't know about before. It's a time intensive process and I don't have any input into my decision other than reading the description. Sometimes I'll install it immediately. Other times I'll check out the website to see if it's still maintained (or if there's a better alternative). It's all a very manual process.
popcon doesn't fill this void. Popcon tells me what packages are popular across all users. I'm more interested in what a subset of users with similar interests or preferences would install. Or maybe I want to see what it's like to live in someone else's shoes. For instance, maybe I'm learning a new programming language and I want to setup my environment similar to an experienced user so I have all of the popular libraries already installed.
It would be nice if there was a better way to discover packages that are relevant to you. Perhaps you could add this feature as a way of getting people to install popcon? For example, you could say if you install popcon, then it will upload your set of installed packages and make recommendations for you.
If people are able to add metadata about themselves (e.g. I'm an expert Emacs user and I'm a golang developer), then you could use that plus their package list to make recommendations. I could say "show me what packages golang developers tend to install". Or you could say "for someone with a package list similar to mine, find out what packages are popular that I'm missing".
First-class init that is not systemd
It's no secret that systemd is highly controversial, even spinning off a fork of Debian called Devuan. It might be more favorable to reunite the community by including one alternative init system that is, fundamentally, a first-class citizen in the Debian ecosystem.
"First-class" implies that the user is given a choice on new installations in a specified prompt. The default should be the option "systemd (recommended)".
buster+1 given the expected effort
Individual and hobbyist system administrator
Currently, it's too hard to report bugs, inspect Debian source packages, propose fixes, etc. The overhead for making a simple contribution is too high. Note: this isn't a Debian-specific issue; many open source projects have old infrastructure.
DESCRIPTION: The wiki is frequently stale or incomplete. A lot of people get information much more readily out of a wiki than mailing lists. Like me, for example :) Mailing lists have a very high latency (often infinite) and can be difficult to search.
For example, say you want to host your own apt repo to hold a custom package; this page is not very clear https://wiki.debian.org/DebianRepository/Setup - how do you choose which of all the software types to use? It's a reasonable software overview, but not great to help people get a repo set up.
Arch has a fantastic wiki that's clear and concise. It's also more readable (mediawiki) than Debian format, though I understand Debian aims to work as bare html for greater device compatibility.
DISTRIBUTION: Primarily Stable, with later sections for non-stable if needed.
Secure Boot in Stable
UEFI Secure Boot Support in Debian.
Debian does not run on systems with Secure Boot enabled.
I work at an insurance company and all of our development computers and most of our servers run debian jessie.
We will probably upgrade to Debian 9 very soon! Thanks for all the hard work on Debian!
EDIT: grammar and formatting
- DESCRIPTION: The installer should offer an option to install a simple WM, like i3 or awesomewm, in the way that there is an option in the minimal installer to install a DE like Xfce or GNOME. Bonus points if you make it aesthetically pleasing to some extent.
- HEADLINE: Kernels in repo which do more than the mainline/default kernel
- DESCRIPTION: I'm thinking specifically of the patches by Con Kolivas, but any other useful pre-compiled kernels in the repo would be great. It would save me having to figure it out by myself, and I'm sure many others would welcome the availability of pre-patched kernels, better I/O schedulers, etc.
- HEADLINE: Look into more optimisation (like Solus)
- DESCRIPTION: Solus (www.solus-project.com) does some optimisation on their distro that would be a good-to-have in any other distro
- ROLE/AFFILIATION: Infrastructure programmer for multinational corp
DESCRIPTION: I recently had to reinstall my Debian system for the first time in a while, and was struck by how user-unfriendly the installer still is compared to many of the alternatives. I don't think it's necessarily a problem that it's ncurses-based, but it could use some more explicit hand-holding. I remember one point where I needed to select some options from a list and there was no indication of what keystroke was required for selection (I think I needed to hit '+'?). I'm pretty familiar with command lines and curses-type UIs, and this was unintuitive for me; I can only imagine how frustrating it might be for a more desktop-oriented user.
I also recall a very confusing UI paradigm where the installer steps are a modal view and there's a hidden 'overview/master menu' you can back out into at any time, and it's not clear to me how those two modes are related and what state it leaves your installation in if you back out and then jump into the installation at a different step.
Generally the explanatory text is quite good at telling you what decision needs to be made, and providing necessary info to research that decision if necessary, but how you make those decisions I think could still be improved.
- DESCRIPTION: Debian is the only distribution that I know of that provides .iso images from which you can install the operating system and subsequently install a wide range of (libre) software. In addition, Debian provides update .isos. These affordances make installing and maintaining a desktop computer without an Internet connection, or with a slow and expensive connection, viable. I hope that Debian will continue to provide this affordance as we transition from optical disks over the next few releases.
- DISTRIBUTION: All Debian distributions.
- ROLE/AFFILIATION: End user (desktop)
Wayland as default display server
X11 is aging, so it's time to switch to Wayland. It'd be cool if buster would ship with Wayland as default display server.
- DESCRIPTION: Debian has been a great source of innovation and leadership within the OSS world. Make the next big move by adopting pledge(2) from OpenBSD as the first major mandatory security feature on Linux. There is little hassle in making programs use it, and the LOC in the kernel is tiny compared to, say, SELinux. See the OpenBSD pledge(2) manpage for more details.
- DISTRIBUTION: Any and all!
- ROLE/AFFILIATION: CS program analysis researcher with MIT/CSAIL.
DESCRIPTION: I tested the stretch release candidates in VirtualBox, and while I did eventually get them working, I had to follow the instructions in several bug reports from across both the Debian and VirtualBox project websites.
I don't mind following instructions, so if there is a reason why this can't be achieved seamlessly with zero configuration, then I would at least like to see some official instructions prominent on the Debian website.
COMMENT: Debian is awesome, thanks for everyone's hard work!
DESCRIPTION: The #1 reason why I don't use Debian on the desktop is missing wifi support during installation. I wish Debian could write and include free wifi drivers for all recent laptops.
DISTRIBUTION: Debian 8 on the server. Mint Mate on the Desktop.
ROLE/AFFILIATION: Founder and CEO of a tech startup.
DESCRIPTION: When building images (especially container images), there should be a way to only install the bare minimum to make apt work. No init system, no bash, no filesystem utilities, nothing. Even `debootstrap --variant=minbase` is overkill in that regard.
One way would be to create an option for debootstrap that accepts a list of desired packages (similar to pacstrap from Arch) instead of using "--variant".
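For context, the closest current approximation is minbase plus --include (a real debootstrap flag; the package choice, target directory and mirror below are illustrative). Shown as a dry echo here, since an actual bootstrap needs root and network:

```shell
# Sketch: the smallest chroot debootstrap currently builds, plus extras via
# --include. A true "exactly these packages" mode (like Arch's pacstrap)
# doesn't exist yet; minbase still pulls in an init-less but non-trivial base.
TARGET=./rootfs
MIRROR=http://deb.debian.org/debian

# Drop the leading `echo` (and run as root) to actually execute it:
echo debootstrap --variant=minbase --include=ca-certificates \
    stable "$TARGET" "$MIRROR"
```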
DESCRIPTION: On rolling release distros there's currently a vim version that ships rust syntax highlighting, rustc and cargo. This is pretty much all you need to get started with rust development. Debian stable currently ships rustc, but lacks cargo, which is rather essential if you actually want to compile your project on a debian server. The vim-syntax thing would be nice to have. :)
- DESCRIPTION: Continue with the values that make Debian great. E.g.:
https://www.debian.org/code_of_conduct
https://www.debian.org/social_contract
https://www.debian.org/intro/free
- DISTRIBUTION: (Optional) [stable, testing, unstable, or even a Debian derivative]
DESCRIPTION: At https://fosdem.org , we use the nginx rtmp module intensively. It seems to be becoming a de facto standard when an in-house streaming server is preferred over an external streaming platform. It combines excellently with ffmpeg, the recently packaged voctomix, and several components of the gstreamer framework to create an excellent FOSS video streaming stack. Some debconf video people seem interested too, and there has been some positive interest from the Debian nginx packagers. Unfortunately, there is no clear way forward yet.
Hopefully, Buster opening up might create some opportunities to get things going again!
SEE ALSO: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=843777#23
DISTRIBUTION: Debian 10 stable & Debian 9 backports.
ROLE/AFFILIATION: http://fosdem.org staff (= all year round volunteer), responsible for video streaming & recording since FOSDEM 2017
DESCRIPTION: This is a nitpick/wishlist item, really. I started using Stretch while it was in testing, and noticed that most updates would download rather large sets of icons (a few MB). They look like archive files of icons, and I guess that if anything changes, the whole set is downloaded again. This wasn't the case in Jessie.
When on a slow Internet link, it can definitely slow down upgrades. It would only be noticeable on Testing/Unstable, as otherwise these sets of icons would not change much. But when regularly updating testing, these icon sets were often a significant part of the downloaded data.
It could be nice to make updating those icons optional, for people behind slow links. Alternatively, handling them as a versioned list (text, easy to diff efficiently) + independent files could make their update more efficient than compressed archive files.
Again, just a nitpick/wishlist item. It's just that I haven't chased down what this comes from (I guess for GUI package management like synaptic? TBC) and don't know where this could be reported. You just gave me the opportunity ;)
DISTRIBUTION: Testing/Unstable (any version with frequent changes)
DESCRIPTION: GCC 6.4 will be released soon (July). I wish Debian would pick up all the regression fixes this update brings (under the new numbering convention, 6.4 means no new features and no breaking changes, only fixes). The same goes for CUDA 8.0.61 (already available for ~5 months), a maintenance update to the 8.0.44 shipped in Stretch. I mention this because Jessie never got the latest bug-fix release (4.9.4) of the GCC 4.9 series, not even in backports (it still offers 4.9.2). I wish there were a policy that allowed regression fixes from upstream to be ported, with the same priority as security fixes. GCC and CUDA are only examples; the same scheme would apply to any other package. In my view, this would foster Debian adoption on desktops. If this can't be done for the current Debian stable, I hope my (and other people's similar) concerns will be taken into account in the future. As a developer, I care about this level of support. We all love Debian; we'd just like to make it better. Thanks.
DISTRIBUTION: Debian Stable
DESCRIPTION: Any plans to go ahead and stabilize the dpkg library for buster? Having access to a stable package management library is essential in our software, i.e. being able to verify package signatures and query the database for files, neither of which is supported today.
Also get rid of all interactivity during install and upgrade. It's deadly for managing big fleets.
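On the interactivity point, the usual fleet workaround today is to force non-interactive mode per invocation with standard apt/dpkg options (the apt-get line is shown as a dry echo here):

```shell
# Standard knobs for fully unattended upgrades:
#   DEBIAN_FRONTEND=noninteractive  -- debconf asks nothing, takes defaults
#   --force-confdef/--force-confold -- keep existing config files on conflict
export DEBIAN_FRONTEND=noninteractive

# Drop the leading `echo` to actually run the upgrade:
echo apt-get -y \
    -o Dpkg::Options::=--force-confdef \
    -o Dpkg::Options::=--force-confold \
    dist-upgrade
```

This works, but it is exactly the kind of per-invocation boilerplate the comment is asking to make unnecessary by default.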
- DESCRIPTION: I think there are lots of ways. Things like Flatpak look promising, but also Docker. It would be nice if there were fewer papercuts when using those things. I also dream about a command named "playground [name]" which instantly gives me a shell where I can try stuff without interfering with anything else. When finished, I can just "playground remove [name]". I know that it's possible today, but it's a bit of a hassle.
- ROLE/AFFILIATION: (software developer, mostly fullstack webdev)
DESCRIPTION: It would be great to have a central keychain where keys (SSH, PGP) could be unlocked on a per-session basis. Think of a merge between gpg-agent (one that wouldn't scream about being hijacked every other day) and ssh-agent (one that wouldn't be shell-specific and could handle multiple keys without having to manually run):
> eval $(ssh-agent -s)
> ssh-add /path/to/key1
> ssh-add /path/to/key2
> ...
As a desktop user, what I would like is, on a session basis, when I first provide the passphrase for a given key (when I ssh into a server from the CLI or decrypt a PGP encrypted email from Thunderbird [with enigmail] for instance) have a keychain securely unlock these keys for the duration of the session (that is, until I lock the screen, close the lid or log out).
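Something close to this already exists: gpg-agent can serve the SSH agent protocol too, caching both key types in one daemon. A sketch of the relevant config (these are real gpg-agent options; the TTL values are illustrative):

```
# ~/.gnupg/gpg-agent.conf
# Serve the SSH agent protocol in addition to PGP:
enable-ssh-support
# Cache unlocked passphrases for an hour (values illustrative):
default-cache-ttl 3600
default-cache-ttl-ssh 3600
```

With SSH_AUTH_SOCK pointed at gpg-agent's SSH socket (see `gpgconf --list-dirs agent-ssh-socket`), ssh-add registers keys with the agent and each passphrase is prompted for once per session; what's missing is exactly the desktop integration the comment asks for, e.g. flushing the cache on screen lock.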
Description: More KSPP (Kernel Self Protection Project) security features enabled by default, perhaps even Firejail pre-installed, Wayland as default along with flatpaks, etc.
- DESCRIPTION: I have tried installing debian many times on various machines and have had huge trouble getting the install usb stick to boot properly (or in the end for the bootloader to install) with Debian. Ubuntu installs flawlessly on these machines.
- ROLE/AFFILIATION: (Optional, your job role and affiliation)
DESCRIPTION: In the past I've often run into stuff in Debian just being too old for my needs. I don't need the bleeding edge, but two years is a really long time. I switched to Ubuntu a few years ago, but since I'm not a fan of Canonical it would be nice if I could come back to Debian.
ROLE: full stack web developer
- DESCRIPTION: PNG image files use too much space in Debian's source tree, in users' installed systems, and on Debian's website.
All metadata that does not affect display should be removed, and each file should get a complete lossless recompression run with an optimizing tool.
Just try: find / -name "*.png" 2>/dev/null | xargs -d '\n' optipng -preserve -o7 -zm1-9 -strip all
A byte here, a byte there, and suddenly your system is several MB smaller and actually runs faster.
Upstream should be made aware of this.
- DESCRIPTION: Jessie had a standard Live CD. While the HTML still refers to this flavor, it is not found on any mirror that I checked for Stretch.
I have to use the live CD to install ZFS on Root. I would prefer to not bother downloading or booting a desktop environment when I don't need one.
I don't know why it was removed, but the name was always strange to me. Name it "textonly" or "expert" or something so people don't choose it by accident; "standard" sounds like it is the recommended image.
- DISTRIBUTION: Live CD
Using WiFi direct on most debian-based distros is a hassle, requiring a lot of manual terminal work. A GUI in the network section for WiFi Direct would make connections easier and faster.
- DESCRIPTION: Since Debian testing/unstable are often advertised as targeted at desktop usage, they could benefit from more focus on preventing breakage. I know it's somewhat counterintuitive to expect stability from an "unstable" or "testing" variant, but at the same time Debian would benefit from shedding the stigma of being a server-only distro. Having a robust out-of-the-box desktop experience (one that is not falling behind) is the goal here.
In the period between Jessie and Stretch, testing had a number of breakages in my KDE desktop. Packages fell out of sync (KDE Frameworks and Plasma packages weren't properly aligned, because some packages were stuck in unstable after failing to build on some archs), causing all kinds of instability issues. Lately it has become a bit better, but I think desktop stability could get some more love, especially for the most popular DEs like KDE.
And if neither testing nor unstable fit that role, may be another branch should be created for it?
- DISTRIBUTION: Debian testing / unstable.
- ROLE/AFFILIATION: Programmer, Debian user.
DESCRIPTION: The Linux x32 ABI, for the most part, combines the best of both worlds in x86: the lower memory footprint of 32-bit software (and likewise, the 4 GiB process limits to go with it) by keeping pointer sizes and data types the same as i386, while still allowing applications to take advantage of the expanded registers, instructions, and features of x86_64 CPUs. For most systems that aren't database servers, this can mean large memory-footprint reductions and greater performance as a result. Debian has had an unofficial x32 port for years; it is presently difficult to install and get running.
DESCRIPTION: I know systemd is very controversial, but if we are going to be stuck with it, I would like to see more documentation and examples.
- DESCRIPTION: The last laptop I bought from Lenovo had a Thunderbolt port, and I had to use that port to get 3 x 4k monitors to work. The hardware shipped with non-functional firmware, and the only way to upgrade it was by booting Windows. I wasn't sure whether other devices had old firmware too, so I spent hours waiting for a full OS upgrade. Dell was working on a Thunderbolt firmware loader at the time; not sure if they've released it by now.
A similar situation exists with the Intel AMT firmware security issue (CVE-2017-5689): the only way to upgrade (afaik) is by running a particular Windows installer.
It seems really dumb having to buy a throw-away drive just to be able to boot Windows to upgrade firmware. Obviously, I lay this at the feet of the hardware vendor. I was going to suggest pre-installed Debian, but Lenovo would ruin that with pre-installed crapware.
- DISTRIBUTION: stable
- ROLE/AFFILIATION: entrepreneur
DESCRIPTION: It would be great if Debian finished its LXD (container hypervisor) packaging and got it up to a decently complete level (on par with Ubuntu).
- DESCRIPTION: I would absolutely love a well-supported container system for running testing/unstable in a container. I feel that Docker requires a lot of upfront work, with mixed results.
We often develop software using packages from the next Debian version (such as Python 3.6), and these packages aren't always available in backports or otherwise outside of testing. In these cases it would be really nice to easily boot up this software in a container.
- ROLE/AFFILIATION: Lead Product Developer at Cetrez
DESCRIPTION: There are a few Debian metapackages, but they are really broad. For example, it would be great if a few developer-leaning packages were grouped into one metapackage.
For instance, I always install etckeeper, apt-listchanges, and apt-listbugs. I think anyone following testing or unstable would want to install those, and I'm not aware of any real alternatives. I can't imagine using unstable without apt-listbugs warning you about high-priority bugs in packages that have already been uploaded.
DISTRIBUTION: mixture of testing/unstable.
DESCRIPTION: It is often recommended to separate the OS partition from the user-data partition containing /home. This should be available as an easy option for non-IT users. If one partition exists, a recommended split size in MB is the default. If two partitions exist, they are checked for OS files and home files, so the user sees which one will be overwritten. This is convenient, a safety net for most users, and a lifeline for non-IT people who may not know the recommendation or how to proceed.
- DESCRIPTION: It would be nice if the Debian testing freeze were delayed until a sufficiently stable version of GTK 4 is included in testing (and thus eventually in the next stable).
DESCRIPTION: Debian unstable still has elixir 1.3.3. It looks like the "official" path forward is to add Erlang Solutions as another apt repository and install packages from there. However, this feels wrong to me as a user. I want to get packages from Debian.
I can't remember which distribution it is, but IIRC one of the other ones has developers upload builds from their personal machines and they are signed with GPG. I don't like this because it is opening yourself up to problems. Perhaps someone uploads a malicious binary build. Or perhaps their developer machine is compromised and someone else uploads it for them or infects their upload.
All of this would go away with 100% reproducible builds in Debian, built on Debian infrastructure. That's not the case when Erlang Solutions is set up as the provider.
I realize this is a minor point as few people will install it, but I was surprised that other distributions include the latest Elixir but Debian does not. The latest is 1.4.4 and I couldn't find anything related to 1.4.x in the upload queue or bug reports. It seems like the package maintenance has been outsourced to Erlang Solutions.
Description: In the Debian installer you can choose a few standard setups. The default options are a bit crazy to me, and I also miss a lot of packages by default.
A bit of cleanup would be nice (IIRC you can select "database server", for example, which gives you MySQL/MariaDB).
It would be nice if you could specify a code like "xxx/yyy" that would resolve to a public repo of predefined templates, in which you could also define your own.
I, for one, would define a server, workstation, and laptop setup. The server setup would include sshd, screen, etc.
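Something like the following could work; the format, hostname, and repo layout below are entirely made up for illustration:

```yaml
# fetched by resolving "xxx/yyy" against a template repo, e.g.
# https://templates.example.org/xxx/yyy.yaml  (hypothetical)
name: server
description: minimal headless server
packages:
  - openssh-server
  - screen
  - etckeeper
services:
  enable: [ssh]
```

The installer would fetch the template, show the package list for confirmation, and feed it to the package selection step.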
DESCRIPTION: something like 'apt-get deps <package>' returning a list of all dependencies of a package. This would be super duper handy when trying to install a standalone package file on a system where the deps aren't already present.
- DESCRIPTION: This request might not be feasible in the short term, or ever, but personally I hope it can be done.
For the desktop, I wish there were a Debian-defined environment or set of interfaces that integrates transparently with desktop components like the power manager. Then, when switching between desktop environments or window managers, I wouldn't need to tune specific (particularly non-Debian-style) settings to get things working.
For Kernel, I would like to see integration with seL4.
- ROLE/AFFILIATION: Software Engineer
DESCRIPTION: Please disable pcspkr by default :-)
DESCRIPTION: installing Debian should be a straightforward process for average Joes and Janes; that's not the case currently. The process of acquiring the proper ISO and getting it onto a bootable USB stick/SD card is overly complicated (because the information is hidden, missing, or incomplete).
As an average Joe, when you visit debian.org there is no obvious place to click to get the latest stable ISO. The default (in a tiny box in the upper-right corner of the homepage) is a net-install ISO. Net-installs are sub-optimal for users who require special firmware for their network card (dvd-1 amd64 should be the default).
You should consider that the default install process for most desktop users will consist of installing Debian from a USB stick on an amd64 system. Once the right ISO is properly put forward, you should provide clear and detailed info on how to transfer the ISO to the USB stick and make it bootable.
Etcher is a free/libre, cross-platform, user-friendly, straightforward GUI (over "dd" iirc) that takes care of the process of making a bootable drive. It should be promoted and made part of the install doc.
Same goes for SD-card installs: many single-board computer enthusiasts (who are not necessarily tech savvy) give up on making a bootable SD card themselves and simply buy a pre-installed one, because the information isn't provided in a straightforward fashion on the Debian website and they are not offered a relatively simple process.
No, using "dd" from the CLI isn't simple: as a Joe you must deal with many un-obvious concepts (wait, what does "the volume is mounted" mean? how do I unmount it? how do I identify the proper volume? fuck, I unmounted the drive and it won't auto-mount anymore! file-system? what are you talking about? MBR? DOS-compat...)
ROLE/AFFILIATION: electronics engineer, based in Europe, involved in local initiatives to promote free software (LuGs, crypto parties, hacker spaces,...)
Thank you for your awesome work, I wouldn't be involved in promoting free/libre operating systems if it wasn't for Debian (a great community project that cares for users rights/freedoms and provides an overall simple desktop experience).
- DESCRIPTION: LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage containers. It's basically an alternative to LXC's tools and distribution-template system, with the added features that come from being controllable over the network.
- DISTRIBUTION: Stable
- ROLE/AFFILIATION: Enthusiast and wanna be developer
Instead of pinning to, say, PHP 7.1.5, pin to 7.1 and stop backporting fixes. It's okay to ship 7.1.6.
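On the user side, apt preferences can already express "track the minor series" rather than a single point release; a sketch, assuming a repository that actually ships the point releases:

```
# /etc/apt/preferences.d/php71  (illustrative)
Package: php7.1*
Pin: version 7.1.*
Pin-Priority: 990
```

This doesn't change Debian's policy of backporting fixes into a frozen version, but it shows how the "pin to 7.1" idea looks in apt terms.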
DESCRIPTION: Debian should make it easy to set the desktop to a light color theme. Right now it is quite difficult for users to change the desktop look and feel. Please also do usability testing of changing desktop settings. The current dark color scheme does not suit all users; offering both a dark and a light theme would cover more users.
Many thanks to all the Debian developers for creating a great distribution!
No systemd (and pulseaudio if desktop) for me.
DESCRIPTION: Personally I'd like something like 'apt-get update --local' which pulled down a full local copy of every configured repo. That'd be super handy for something like a build machine, and it'd reduce the need to install and maintain an Aptly repo.
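Until something like that lands in apt itself, apt-mirror gets close; a minimal sketch of /etc/apt/mirror.list (paths and suites assumed):

```
set base_path    /var/spool/apt-mirror
deb http://deb.debian.org/debian stretch main contrib non-free
deb-src http://deb.debian.org/debian stretch main
clean http://deb.debian.org/debian
```

Run apt-mirror from cron and serve base_path over HTTP to the build machines.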
DESCRIPTION: I think I represent a number of users. We want to use unstable as a rolling distribution, but we don't want to run into every edge case. Testing doesn't update fast enough and doesn't get security fixes as quickly. There's no middle ground between the absolute bleeding edge and the too-conservative testing.
I used to use unstable, but there's that annoying race condition where I could upgrade at exactly the wrong time, when brand-new (broken) package versions were uploaded and not enough time had passed for even the first round of bug reports. I'd like a one-day safety buffer so apt-listbugs has a chance to warn me about catastrophic bugs.
Setting up a true rolling distribution may be too much work for Debian. Actual Debian developers will be running unstable. It would be nice if there was a middle ground for non-Debian developers who want a rolling distribution but don't want to get hit by every edge case in sid.
I think a nice compromise would be to cache the sid packages for a day (or two) and set that up as another branch. A full day of possible bug reports from people on bleeding edge sid would give us a chance at missing the catastrophic edge cases while still being very current.
I think this could encourage more Debian developers. If I wanted to join Debian as a DD, I would need to have an unstable installation somewhere. It wouldn't be my daily driver because I don't want to run into those breaking edge cases. If my daily driver was day old sid, I could have another machine / VM that runs sid and would almost be identical to what my daily driver is running. It's not like testing where packages could be entirely different due to the delay in migrating.
Unlike testing, day old sid would migrate all packages even if there are release critical bugs. There would be no waiting period beyond the strict day limit. If there is a catastrophic edge case, people already on day old sid using apt-listbugs would be able to avoid it. New installations would hit it but you could warn users (see below).
If you make apt-listchanges and apt-listbugs as required packages for day old sid, then people could be informed about what broke on the previous day.
It would be nice to integrate apt-listbugs into an installer for day old sid and fetch the latest critical or high priority bugs before the installation. A new user could then decide if that's a good day to install. Or you could have a simple website that says here's the day old sid installer and these packages currently have critical or high priority bugs. If you would install those packages, maybe wait another day or two for it to settle down.
Maybe day old sid is too close. Perhaps 2 day sid or 3 day old sid? I don't feel that testing fills this role already because testing waits for 2-10 days and won't update if there are release critical bugs. I'm fine with something closer to bleeding edge sid, but I'd really like to allow a few days for the bleeding edge users to report bugs so I can decide whether to upgrade. I don't have an expectation that day(s) old sid is more stable than testing or less unstable than sid. All it provides is a buffer so I can get bug reports and make my decision about whether to upgrade.
DISTRIBUTION: day old sid.
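snapshot.debian.org can approximate "day-old sid" today: point sources.list at a dated snapshot and bump the timestamp daily. A sketch (the timestamp is a placeholder; the service is bandwidth-limited, and because Release files go stale you need to relax the validity check — on older apt, pass -o Acquire::Check-Valid-Until=false instead):

```
# /etc/apt/sources.list — sid as of a fixed point in time
deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20170616T000000Z/ sid main
```

A cron job rewriting the timestamp to "now minus one day" gets most of the way to the proposal, minus the installer and apt-listbugs integration.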
- DESCRIPTION: A tool to log process spawns, kills, network connection start/stop, file modifications, etc. into event logs for review.
- DISTRIBUTION: Kali
- ROLE: Security Analyst
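Much of this exists in Linux auditd already; a minimal rules sketch (the -k key names are arbitrary labels for later searching):

```
# /etc/audit/rules.d/review.rules
# log every process execution (64-bit syscalls)
-a always,exit -F arch=b64 -S execve -k proc-spawn
# log writes and attribute changes under /etc
-w /etc/ -p wa -k etc-change
# log outbound connection attempts
-a always,exit -F arch=b64 -S connect -k net-conn
```

Events can then be reviewed with e.g. `ausearch -k proc-spawn`. What's arguably missing is a friendly, pre-configured default rule set shipped by the distro.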
SELinux installed by default
Not sure what else to say...
DESCRIPTION: systemd is creating far more issues than benefits. Everyone knows it except its author, L. P. Still, Debian has chosen to go down this road, and the result is that people had to fork and move to Devuan. Go back to a sane, simple, stable init system. This is especially true for a server-oriented distribution.
ROLE: Fabio Muzzi, freelance linux sysadmin since 1995, loyal Debian user up to Debian 7, now Devuan user and supporter.
DESCRIPTION: In my use cases, which I think are common, I want a stable base operating system and user interface, but I want the applications I work with every day (browser, compiler, office suite, etc.) to be cutting edge.
My dream is to separate packages into two tiers with different update policies, similar to the Android and Apple app stores, and for that matter BSD ports. Platform software like the kernel, system libc, X11, and desktop environments release and update like stable. "Apps" like Firefox and LibreOffice are easily installed and updated on a rolling basis.
I know that I can achieve this now with a custom backports and apt pinning config, but that's more of a low-level project than I'm envisioning. My request is for something that's more of a newbie-friendly point-and-click sort of thing.
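For reference, the low-level version of this is just a backports entry plus explicit opt-in installs; a sketch for stretch:

```
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian stretch-backports main

# backports are pinned low by default, so "apps" are pulled in explicitly:
#   apt-get -t stretch-backports install firefox-esr libreoffice
```

The point-and-click version would essentially wrap this in a software-center toggle per application.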
DESCRIPTION: For many years I've been fond of Debian and have used it for side hobby projects. But I've had to use Ubuntu and Fedora for real work because I need a modicum of certainty about the intervals between releases.
I acknowledge that Ubuntu's rigid release-every-6-months, LTS-every-24 is impractical for a volunteer project with high standards. But without any firm timeline it's impossible for me to plan and use Debian in production.
For example, a commitment that releases will always be spaced somewhere between 6 and 24 months apart would go a long way.
You can even program JScript on the server side with ASP, or execute it standalone with ActiveScript, and even control native GUI elements, like customizing your folders; the browser can be morphed into the file explorer. You can make apps with a few KB of JScript, unlike a 55 MB Electron install bundle.
The Windows help files (CHM) are a thousand times better than the macOS counterparts and Linux man pages. CHM was the de facto ebook format back then, and it works really well, with features like indexable topics and full-text search. Now we have to use devdocs.io or Dash.
Yes, it has its quirks and warts, but it was way ahead of its time.
3D Printing - This is going to be the main way to manufacture things in the future. There's a lab that is 3D printing houses with concrete, which makes me terrified for home values going forward. It will likely shift all the value into the land; the house will just become something you tear down and reprint every 10 years.
CRISPR - s/shitty gene sequence/perfect gene sequence/g That's insane. It's like an anti-virus product for the body (irony intended). We're going to live a very long time and be practically disease free pretty soon. I'm planning on living until 150 (27 now). It's placing a big bet on medical science, but I feel like we're on the edge of some huge things.
Neuralink - Develops high bandwidth and safe brain-machine interfaces. (https://neuralink.com/)
Magic Leap - Mixed Reality (https://magicleap.com)
Crispr-Cas9 - A unique technology that enables geneticists and medical researchers to edit parts of the genome by removing, adding or altering sections of the DNA sequence. (https://en.wikipedia.org/wiki/CRISPR#Cas9)
This is a great question. The acceleration of technology has made it important for entrepreneurs to look further ahead than ever when deciding where they want to make their impact on the world. Tomorrow's successful business leaders will be the ones who peered into the most obscure places of the future to find its problems and its solutions.
What was 20 years ahead of its time then? What would you have looked at and thought "That'll be massive in 20 years"?
About the only thing I can think of is VR. Which Sega tried to launch in the late 90s, and only now is selling over a million units.
I want to believe.
It's a clock. A physical clock. Designed and built to run, accurately, for 10,000 years without human intervention.
People can do it but they prefer living the way they do, which is what is causing the problems, knowing in principle that they should change their behavior but not actually doing so.
Miami flooding more and more is not enough of a burning platform yet. Nature will provide it if we don't choose to change ourselves.
His Digital Monetary Trusts: https://en.wikipedia.org/wiki/Digital_Monetary_Trust
The End of Ordinary Money: https://www.memresearch.org/grabbe/money1.htm
Cyc: an artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common-sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.
The project was started in 1984 by Douglas Lenat at MCC and is developed by the Cycorp company. Parts of the project are released as OpenCyc, which provides an API, RDF endpoint, and data dump under an open source license.
Prolog, backward chaining, forward chaining, opportunistic reasoning.
1: VR, self-driving vehicles, nuclear fusion, artificial photosynthesis, quantum computers, robots that can manipulate objects the way humans do, wave-energy harvesting, colonizing Mars, curing cancer, curing Alzheimer's disease.
2: no idea!
3: drones, deepmind, blue led, electric sports cars, flyboards, voice activated assistants, smart wearables...
Cryptocurrencies. 3d printers.
I am being sarcastic. But it's very hard to see, today, any technology that could make my life significantly better (at least, short of fixing climate change).
I could see this happening within 20 years, but not in the confines of the current project.
* No accounts, no passwords, just secret keycaps
* Instead of messy and complex role-based tables, capabilities always know exactly what they are capable of doing
* No more confused deputies
* Fine-grained trust
Does that count?
I track open positions from time to time on this website: https://blockchain.works-hub.com/.
There are a few others.
Angel.co has Remote OK as a job search parameter
I'm one of the engineers, and can honestly say it's a great company to work for.
I don't know if they have any openings currently but definitely worth checking them out in my opinion.
My personal experience is that local jobs turn into remote positions more easily than applying against thousands of other candidates.
I'd be interested in looking into any examples of searches where the results aren't good enough or where it seems to have gotten worse recently.
As far as I know there haven't been any changes over the past few weeks that would have made things worse.
It's not an easy or memorable name at the moment, and branding matters.
I used DDG for a while when it was introduced, but went back to Google since the results were not as good.
Recently, though, I felt Google's results had got a lot worse and gave DDG another try.
Big difference! Like Google vs Yahoo back in the days.
Now DDG is my default search.
I guess my estimated worth to Google must be fairly low because I don't click on many ads and I often use a work VPN.
I'm in the UK and noticed I often seem to be getting US centric results and have to try using Google more often.
Edit: ddg has been my default for 2 years.
I was hoping to build an extension for DDG a few months back, but things seemed to have changed in the forum.
This could explain why we are seeing changes.
- Throw away self driving cars for now. The tech will become commoditised. Almost everyone at YC in November was doing self driving motorcycles (I have no idea why either).
- I'd closely align Uber with consumers and environmental groups rather than falling in with taxi-industry corruption, lobbying, etc. Make cities change laws to benefit their citizens: let ride sharing exist so people can get picked up in rough neighborhoods (PS: abandon tipping, it breaks this), allow Uber cars in public-transport lanes (because they are public transport), and make sure ride sharing has dedicated space at the airport. Be tough on local governments when you need to be, but better yet, have consumers be tough for you. Expose the risks that cities like Austin have put consumers in by replacing Uber with Facebook groups of strangers. Expose cities like London, where the normal black cabs frequently and illegally refuse to pick up passengers and the mayor wants to 'protect' them because they're 'historical'. Ride sharing is for everyone.
Or, give up on the model of turning drivers into serfs for the benefit of privileged hipsters and focus on more mass-transit solutions.
2. If I survived the previous activity, then having already once been assigned the position of CEO in a company destroyed by the previous management, I'd be as transparent as I could to all interested parties (that would include more than the investors) about the challenges and opportunities.
3. If I was still kicking after that, I'd implement (many) of the two dozen pages of notes I took from being an Uber driver for a year to see what it was about. I like talking with (a variety of) people, and a "taxi driver" is like a bartender in the natural sharing of thoughts for many passengers (who are poor to rich, small to large business people, ranging from unknown to runway-model-famous people).
4. Passengers would not be feeling like they were guessing about a) the fare, or b) the quality of the car, or c) the quality of the driver.
5. Drivers would not be treated as third-class citizens.
Get every single employee involved. Have a very big summit with the single goal: creating the culture that Uber should aspire to be, and coming up with a distinct plan for how they're going to get there. Engineers like to solve problems, and it's clear there's a big one here that has been identified.
Once we have our goal of who we want to be as a company, there will need to be continual work to make sure we're still aligned on that goal, aligned on that culture. There will be people who need to leave, by their own choice or not, based on whether they want to and can be part of that change.
Probably worth hiring Fowler to be part of it, if she's willing.
Step 2: Cut some losses.
This Uber/Waymo thing? It's time to settle. It's clear that even if somehow Uber is innocent in all this, we're not going to win the case in court. Come to Google and say "We're sorry we let this happen. We want to be better than that. Waymo and Uber have the same goals in mind, so let's work together." It'll be expensive, but it'll be cheaper than never having self-driving cars.
Step 3: Plan.
Come up with 1, 3, 5, and 10 year plans. Where does Uber intend to be at each of those milestones? How do they relate to each other? On what day is Uber profitable? How does Uber stop the bleeding? And are these milestones achievable while still meeting the cultural goals from step 1? If not, come up with better goals. If I can't find a way to profitability without meeting the cultural goals, I step down and let a better leader step up.
Uber board, let me know when you're ready for my bold and inspiring leadership.
2) Refocus on the core product. Drop the tipping, convert more drivers through incentives or slightly bumping fare payouts on the backend. They can pull some of the cash they'll save from dumping self-driving tech on this. People won't use competitors if they don't have drivers and the fares are higher
3) They should really consider having a few real employees in hot markets who screen first-time drivers and their cars. I've had a few drivers show up in vehicles that technically met Uber's standards but were really shitbuckets. That's not the image Uber wants to present.
I'd probably try to rebrand as the new, legal Uber and start spending money on lobbying local politicians. The "taxi-app" market has almost no switching costs and so Uber has almost no pricing power. They'll need to fix this to ever run a profit. Effectively Uber exists because it broke the laws that created barriers to entry. If it wants to continue to exist it needs to erect new barriers that protect it and keep out competition.
How to build loyalty? Create a rewards/mileage program similar to the airlines'. Most business travelers commit to one airline because of the status and perks they receive. Some Uber perk ideas include:
1. priority response during high demand
2. free "upgrades" from UberX to higher class vehicles when demand/wait time allows
3. partnership with airline lounges to get access when traveling
2. Cut costs. Reduce headcount, relocate (out of SF/SV) or offshore dev work. Spin off or sell off expensive ventures, like self-driving research -- there are too many companies working on self-driving, it's not smart to compete with them in-house. Partner with one instead.
3. Increase revenue and maximize rider capture.
- Reduce fare subsidies in metros with lower competition, but offer a loyalty program for riders.
- Expand high-margin lines of service like UberRUSH, the courier service. Forge contracts with suburban/exurban governments to provide transportation services; try to absorb government subsidies.
- Monetize data collected during normal operations: partner with market research firms, expand incidental mapping operations to reduce reliance on external maps.
- Deepen partnerships with automakers, and make preferred partnerships with sites that are frequent origins and destinations.
- Think about usecases: not just cab-hailing in one's hometown, but offering safe passage to high-profile sites in foreign metros while one is travelling. Make people choose Uber for the same reasons they discretionarily choose another brand: consistency of experience, trust, and perks; i.e. don't compete solely on price. Make partnerships that support these use-cases.
Companies to be wary of: Google, Amazon, and automakers with whom they have no pre-existing relationship.
Companies to court: HERE Maps (owned by a consortium of Audi, BMW, and Daimler); hyperlocal providers like FourSquare, Snapchat; car-sharing companies like Zipcar; arch-rivals of their competition like Facebook, Walmart (!); Airbnb and hotel chains.
Then I would raise fares and remove tipping. Improve the customer experience.
Finally, I would hire more local employees to monitor on-the-ground operations like drivers, cars, and service. Set up a mentoring program to help drivers get started and stay on. No one is fooling anyone by treating them like contractors. For customers, drivers are the face of the company. This brings to mind Disney's "cast member" concept: the lowest employees on the totem pole are the ones your customers see the most, so invest in them.
2. Ask Chris Sacca to come out of retirement and to get more involved in Uber's strategy.
3. Find a COO like yesterday.
4. Shut down Uber eats and Uber rush and double down on self-driving initiative.
5. Make the team leaner and more agile, and eliminate one layer of management (engineering managers, etc.). Dev teams should be 100% autonomous, driven (not managed) by product managers.
6. Eliminate Tips.
7. Pay Uber drivers way more and have them sign a special contract (can't sign-up with other competitors).
8. Increase passenger engagement during a ride by providing location-based deals, events, etc. A twitter-like app to use while riding an Uber.
etc... this is just a short list.
I would have run a management buyout (this is why the smear campaign started, right? 70 billion is way too expensive, and the valuation needed to be pushed down fast while providing plausible reasons for it; congratulations, it worked).
Then I'd have fired half of those 14k employees (really? for a startup?), and finally pivoted by rolling out their geo-matching API for everybody to run their own businesses and services on (they should have had this for like two years already). The first one to provide something like AWS for the sharing economy wins!
Acknowledge the reality that Uber will not conquer the world and is not worth seventy billion dollars by taking a down round (assuming they need to raise money).
It's been my experience that the people on the ground know what's wrong with the company, they just don't have the authority or vision to do anything about it.
Then look for the most common threads and how to tackle those.
Naive, I know. But it's what I would do.
I think they should:
* Make sure HR problems are actually resolved internally. It's not a big problem to solve, I think.
* Try to grow in more towns and countries aggressively.
* Use economies of scale to bring down costs for drivers so they can keep prices down without burning VC money. For example:
Buy insurance in bulk for their drivers,
Contract repair shops for their drivers,
Even buy cars in bulk and lease them out to drivers.
Since 180 days really isn't enough time to onboard a COO or CFO, I would only focus on damage control. I would need those executives to help curb the burn rate.
Before I accepted the job, I would accept that there is a very high probability that this will end badly and that my reputation will never recover. I would wonder whether the VCs who were instrumental in bringing me in would stand by me when this goes to hell, or would I take the lion's share of the blame?
So, I would make sure that my compensation from Uber was properly invested so that I could survive the rest of my life if I could never get another job. And, I would work with a very competent money manager to optimize every single dollar that I would get from Uber.
If Uber is going to reach a liquidity event, it needs some highly competent executives who can take leadership of their respective areas. At this point, Uber badly needs a CEO/COO/CFO troika that can work together. Therefore, my first real step as CEO would be to help the board recruit solid AAA+ players for the two remaining positions.
While recruitment was ongoing, I would undertake two major initiatives in tandem. First, I would conduct my own investigation into what happened and learn as much about the previous culture that I could. There is a very high probability that this investigation would lead to a new round of firings. Therefore, the second initiative would be to be completely transparent and wholly public about what is going on. I have to assume that every single thing that I would do would be heavily scrutinized by the media, investors and stakeholders. Therefore, I would get in front of it and send weekly emails to employees/investors that were cross posted on a blog. If journalists want to see fire, they can see the same fire that I see. In a news vacuum, it gets more tempting to print dubious sources who may or may not actually be telling the truth.
Once I had a solid grasp on what happened (and once I knew that the bad apples were all promoted to Uber customers, or maybe drivers), I would start fixing the culture. Hopefully, by this point, we would have at least a COO on board. With her help, I would make sure that HR was fully independent and powerful. Simply put, the new Uber would have a strong 'no asshole' rule.
SDC be damned....Uber drivers need to be promoted to first class citizens within the Uber ecosystem. Once I was fairly confident that the culture now marginalized assholes, I would work closely with drivers. I would argue that Uber drivers have an incredible understanding of all the efficiencies and inefficiencies within Uber's market. Therefore, we need to empower drivers and encourage them to come forward with any suggestions to make their jobs better. I argue that Uber is closely tied to their drivers. As drivers succeed and make money, Uber should succeed and make even more.
In 180 days, sadly, I don't think I would have the chance to put a true focus on rider demands. In fact, I'm not even sure that I would have time to engage the drivers. But, these five steps are my perfect case.
phireal@pc ~$ ls -1
Box/       - work nextcloud
Cloud/     - personal nextcloud
Code/      - source code I'm working on
Data@      - data sources (I'm a scientist)
Desktop/   - ...
Documents/ - anything I've written (presentations, papers, reports)
Local@     - symlink to my internal spinning hard drive and SSD
Maildir/   - mutt Mail directory
Models/    - I do hydrodynamic modelling, so this is where all that lives
Remote/    - sshfs mounts, mostly
Scratch/   - space for stuff I don't need to keep
Software/  - installed software (models, utilities etc.)
phireal@server store$ ls -1
archive    - archived backups of old machines
audiobooks - audio books
bin        - scripts, binaries, programs I've written/used
books      - eBooks
docs       - docs (personal, mostly)
films      - films
kids       - kids films
misc       - mostly old images I keep but for no particular reason
music      - music
pictures   - photos, organised YYYY/MM-$month/YYYY-MM-DD
radio      - podcasts and BBC radio episodes
src        - source code for things I use
tmp        - stuff that can be deleted and probably should
tv_shows   - TV episodes, organised show/series #
urbackup   - UrBackup storage directory
web        - backups of websites
work       - stuff related to work (software, data, outputs etc.)
Currently reconstructing the entire thing to production spec, as an AWS AMI, perhaps later polished into a personal knowledge base SaaS where the cleaned and sorted content is publicly accessible with a REST/CMIS API.
This project has single-handedly eaten almost a third of my life.
.
  Desktop
  Downloads
  Google Drive  // My defacto Documents folder
    legal
    library     // ebooks and anything else I read
  ...
  Downloads
    Sandbox     // all my repositories or software projects go here
    Porn        // useful when I was a teen, now just contains a text file with lyrics to "Never Gonna Give You Up"
- music: Musicbrainz Picard to get the metadata right. I've been favoring RPis running mpd as a front-end to my music lately.
- movies/TV: MediaElch + Kodi
I don't have a good solution for managing pictures and personal videos that doesn't involve handing all of it to some awful, spying "cloud" service. Frankly most of this stuff is sitting in Dropbox (last few years' worth) or, for older files, in a bunch of scattered "files/old_desktop_hd_3_backup/desktop/photos"-type directories waiting for my wife and I to go through them and do something with them. Which is increasingly less likely to happen. Sometimes I think the natural limitations of physical media were a kind of blessing, since one was liberated from the possibility of recording and retaining so much. Without some kind of automatic facial recognition and tagging (and saving of the results in some future-proof way, ideally in the photos/videos themselves) this project is likely doomed.
My primary unresolved problem is finding some sort of way to preserve integrity and provide multi-site backup that doesn't waste a ton of my time+money on set-up and maintenance. When private networks finally land in IPFS I might look at that, though I think I'll have to add a lot of tooling on top to make things automatic and allow additions/modifications without constant manual intervention, especially to collections (adding one thing at a time, all separately, comes with its own problems, like having to enumerate all of those hashes when you want something to access a category of things, like, say, all your pictures). Probably I'll have to add an out-of-band indexing system of some sort, likely over HTTP for simplicity/accessibility. For now I'm just embedding a hash (CRC32 for length reasons and because I mostly need to protect against bit-rot, not deliberate tampering) at the end of filenames, which is, shockingly, still the best cross-platform way to assert a content's identity, and synchronizing backups with rsync. ZFS is great and all but doesn't preserve useful hash info if a copy of a file is on a non-ZFS filesystem, plus I need basically zero of its features aside from periodically checking file integrity.
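A minimal sketch of that embed-a-hash-in-the-filename scheme. The bracketed-hex suffix format is my assumption (the commenter doesn't show their exact format), and POSIX `cksum` (a CRC-32 variant) stands in for whatever CRC tool they actually use:

```shell
# Embed a CRC in the filename so bit-rot can be detected later.
tag_with_crc() {
  file="$1"
  crc=$(cksum "$file" | awk '{printf "%08x", $1}')
  base="${file%.*}"
  ext="${file##*.}"
  mv "$file" "${base}.[${crc}].${ext}"
  printf '%s\n' "${base}.[${crc}].${ext}"
}

# Verify later: recompute the CRC and compare with the one in the name.
check_crc() {
  file="$1"
  embedded=$(printf '%s\n' "$file" | sed 's/.*\.\[\([0-9a-f]*\)\]\..*/\1/')
  actual=$(cksum "$file" | awk '{printf "%08x", $1}')
  [ "$embedded" = "$actual" ]
}
```

The nice property, as noted above, is that the hash travels with the file across any filesystem or sync tool, unlike ZFS-level checksums.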
- bin :: quick place to put simple scripts and have available everywhere
- build :: download projects for inspection and building, not for actively working on them
- work-for :: where to put all projects; all project folders are available to me in zsh like ~proj-1/ so getting to them is quick despite depth.
  - me :: private projects for my use only
    - proj-1
  - all :: open source
    - proj-2
  - client :: for clients
    - client-1
      - proj-3
- org :: org mode files
  - diary :: notes relating to the day
    - 2017-06-21.org :: navigated with binding `C-c d` defaulting to today
  - work-for :: notes for projects, with directory structure reflecting that of ~/work-for
    - client
      - client-1
        - proj-3.org
- know :: things to learn from: txt's, books, papers, and other interesting documents
- mail :: maildirs for each account
  - addr-1
- downloads :: random downloads from the internet
- media :: entertainment
  - music
  - vids
  - pics
    - wallpaper
- t :: for random ad-hoc tests requiring directories/files; e.g. trying things with git
- repo :: where to put bare git repositories for private projects (i.e. ~work-for/me/)
- .password-store :: (for `pass` password manager)
  - type-1 :: ssh, web, mail (for smtp and imap), etc.
    - host-1 :: news.ycombinator.com, etc.
      - account-1 :: jol, jolmg, etc.
Other things are better sorted by category or topic. For tools or programming languages I'm researching I might have a directory with items "01_some-language", "02_setup", "10_type-system", "20_ecosystem", etc.
~/dev for any personal project work
~/$COMPANY for any professional work I do for $COMPANY
~/teaching for teaching stuff
~/research for academic research (it's a big mess unfortunately)
~/icl for school related projects (where "icl" is Imperial College London)
For my PDFs I use Mendeley to organize them and have them available everywhere along with my annotations.
I store my books in iBooks and on Google Drive in a scheme roughly like: /books/$topic/$subtopic
Organizing your files is usually just commitment: move files out of ~/Downloads as soon as you can :-)
Web sites are
sitename/
  info - login data for site, domains, etc.
  site - what gets pushed to the server
  work - other stuff not pushed to server
As for all "working" documents, they're local to my machine under a documents or project folder. The documents folder is synced to all my devices and looks the same everywhere with a similar organization structure as my external drive. My projects folder is only local to my machine, which is a portable, and contains all the documents needed for that project.
TL;DR Shallow folder structure with dates at the beginning of files essentially.
- /x/src contains all Git repos that are pushed somewhere. Structure is the same as wanted by Go (i.e., GOPATH=/x/). I have a helper script and accompanying shell function `cg` (cd to git repo) where I give a Git repo URL and it puts me in the repo directory below /x/src, possibly cloning the repo from that URL if I don't have it locally yet.
$ pwd
/home/username
$ cg gh:foo/bar  # understands Git URL aliases, too
$ pwd
/x/src/github.com/foo/bar
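A rough sketch of how such a `cg` helper might look in plain shell. The actual script isn't shown, so the `gh:` alias expansion and the clone-on-miss behavior here are assumptions:

```shell
# Map a Git URL (or a "gh:" alias) to its path under /x/src, Go-workspace style.
cg_path() {
  url="$1"
  case "$url" in gh:*) url="https://github.com/${url#gh:}" ;; esac  # alias expansion (assumed)
  url="${url#https://}"
  url="${url%.git}"
  printf '/x/src/%s\n' "$url"
}

# cd into the repo directory, cloning it first if it isn't there yet.
cg() {
  dir=$(cg_path "$1")
  [ -d "$dir" ] || git clone "https://${dir#/x/src/}" "$dir"
  cd "$dir"
}
```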
- /x/bin is $GOBIN, i.e. where `go install` puts things, and thus also in my PATH. Similar role to /usr/local/bin, but user-writable.
- /x/steam has my Steam library.
- /x/build is a location where CMake can put build artifacts when it does an out-of-source build. It mimics the structure of the filesystem, but with /x/build prefixed. For example, if I have a source tree that uses CMake checked out at /home/username/foo/bar, then the build directory will be at /x/build/home/username/foo/bar. I have a `cd` hook that sets $B to the build directory for $PWD, and $S to the source directory for $PWD whenever I change directories, so I can flip between source and build directory with `cd $B` and `cd $S`.
- /x/scratch contains random junk that programs expect to be in my $HOME, but which I don't want to backup. For example, many programs use ~/.cache, but I don't want to backup that, so ~/.cache is a symlink to the directory /x/scratch/.cache here.
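The `/x/build` cd hook mentioned above could be sketched roughly like this. Overriding `cd` as a shell function is my assumption about the mechanism; `$B` and `$S` are the variables the commenter describes:

```shell
# Wrap cd so $S always points at the source directory and $B at the
# mirrored build directory under /x/build.
cd() {
  command cd "$@" || return
  case "$PWD" in
    /x/build/*) B="$PWD"; S="${PWD#/x/build}" ;;  # inside a build tree
    *)          S="$PWD"; B="/x/build$PWD" ;;     # inside a source tree
  esac
}
```

With this in place, `cd $B` and `cd $S` flip between the two trees as described.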
Outside of that scope, my files reside randomly somewhere in the ~/Documents folder (I use a mac) and I rely on spotlight to find the item I need. It's not super great but is workable often enough.
It's not a silly question!
edit: I've been trying to find a multi-disk solution and haven't had much success with an easy enough to use tool. I use git-annex for this and it helps to some extent. I've also tried Camlistore, which is promising, but has a long way to go.
Non-Golang code goes to ~/code, sometimes ~/code/company-name, but I also have a couple of ad hoc codebases spread around in different places on my filesystem.
So it is a bit disorganized. However, in the last few years I have rarely ever needed to cd outside of ~/code/go.
Some legacy codebases I worked on (and still need to contribute to from time to time) are in rather random places, as it took some effort and time to configure the local environment of some of these beasts to work properly (and they depend on stuff like Apache vhosts), so I am too afraid to move them to ~/code as I might break my local environment.
Filename preserved, ordered by date or grouped in arbitrary functional folders
YYYY.AlbumName (Keeps albums in date order) AlbumName Track# Title.mp3 (truncates sensibly on a car stereo)
YYYY-MM-DD.Event Description (DD is optional)
scripts - reusable across clients
source code documents
I use Beyond Compare as my primary file manager at home and work. Folder comparison is the easiest way to know if a file copy fully completed. Multi-threaded move/copy is nice too.
So, no organization (the OCD part of me hates this) but I always find my files in an instant, no matter where I left them.
~/$MAJOR_TOPIC
|
|--- ./$MORE_SPECIFIC
|
|--- ./$MORE_SPECIFIC
|    |--- ./general-file.type
|    |    ./general-file.type
|
|--- ./$MORE_SPECIFIC
|    |--- ./general-file.type
As you find yourself collecting more general files under a directory that can be logically grouped, create a new directory and move them to it.
Also keep all your directories in the same naming convention (idk maybe I'm just OCD)
This is a directory that can be emptied at any moment without the fear of losing anything important, and which helps me keep the rest of my fs clean. Basically `/tmp` for the user.
I also recommend calibre for e-books, but I never got to the "document store" stage that I think some people have.
- Language/technology-specific research case
Edit: Also you might want to make a small title edit s/files/ebooks unless you are inquiring about other types of files as well.
When reading for pleasure I typically read paper, try to limit the screen time if possible.
~/github - just cloned repos
~/fork - everything forked
~/pdf - all science papers
'pjt' is my tag for projects
'sfw' is my tag for software and computer science
'doToo' is the name of this software project
'cmm' is my tag for interpersonal communications
Projects (tagged with 'pjt') is one of my five broad categories of files, with the others being Personal ('prs'), Recreation ('rcn'), Study ('sdg'), and Work ('wrk'). All files fall into one of these categories, and thus all file names begin with one of the five tags mentioned. After that tag, I use the '>' symbol to indicate the following tag(s) is/are subcategories.
Any tags other than those for the main categories might follow, as 'sfw' did in the example above. This same tag 'sfw' is also used for files in the Personal category, for files related to software that I use personally--for example:
Here, NameMangler is the name of the Mac application I use to batch-modify file names when I'm applying tags to new files. '@nts' is my tag for files containing notes. I also have many files whose names begin with 'sdg>sfw' and these are computer science or programming-related materials that I'm studying or I studied previously and wanted to archive.
A weakness of hierarchical organization is that it makes it difficult to handle files that could be reasonably placed in two or more positions in the hierarchy. I handle this scenario through the use of tag suffixes. These are just '|'-delimited lists of tags that do not appear in the prefix identifier, but that are still necessary to convey the content of the file adequately. So for example, say I have a PDF of George Orwell's essay "Politics and the English Language":
The suffix of tags begins with '=' to separate it from the rest of the file name. A couple of other features are shown in this file name. I use '_' to separate the prefix tags from the original name of the file ('orwell9' in this case) if it came from an outside source. I'm an English teacher and use this essay in class, and that's why the tags 'wrk' for Work and 'tfl' for 'Teaching English as a Foreign Language' appear. 'wrt' is my tag for 'writing', since Orwell's essay is also about writing. The tag 'georgeOrwell' is not strictly necessary since searching for "George Orwell" will pick up the name in the text content of the PDF, but I still like to add a tag to signal that the file is related to a person or subject that I'm particularly interested in. Adding a camel-cased tag like this also has the advantage that I can specifically search for the tag while excluding files that happen to contain the words 'George' and 'Orwell' without being particularly about or by him.
That last file name example also illustrates what I find to be a big advantage of this system: it reduces some of the mental overhead of classifying the file. I could have called the file 'wrk>tfl>politicsAndTheEnglishLanguage=sdg|wrt|lng|georgeOrwell', but instead of having to think about whether it should go in the "English teaching work-related stuff" slot or the "stuff about language that I can learn about" slot, I can just choose one more or less arbitrarily, and then add the tags that would have made up the tag prefix that I didn't choose as a suffix.
There's actually a lot more to the system, but those are the basics. Hope you find it helpful in some way.
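As a quick illustration of the scheme, the pieces of such a name can be pulled apart with plain shell parameter expansion. The structure assumed here is the one described above ('>'-joined prefix tags, an optional '_originalName', an optional '=tag|tag|...' suffix); the sample name itself is hypothetical:

```shell
# Split a tagged file name into prefix tags, original name, and suffix tags.
parse_tagged_name() {
  name="$1"
  suffix="${name#*=}"; [ "$suffix" = "$name" ] && suffix=""       # part after '=', if any
  base="${name%%=*}"
  original="${base#*_}"; [ "$original" = "$base" ] && original="" # part after '_', if any
  prefix="${base%%_*}"
  printf 'prefix=%s original=%s suffix=%s\n' "$prefix" "$original" "$suffix"
}

parse_tagged_name 'pjt>sfw_doToo=cmm|wrt'
# prints: prefix=pjt>sfw original=doToo suffix=cmm|wrt
```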
in my main collection of files for my startup, computing, applied math, etc.
All those files are well enough organized.
Here's how I do it and how I do related work more generally (I've used these techniques for years, and they are all well tested).
(1) Principle 1: For the relevant file names, information, indices, pointers, abstracts, keywords, etc., to the greatest extent possible, stay with the old 8-bit ASCII character set in simple text files easy to read by both humans and simple software.
(2) Principle 2: Generally use the hierarchy of the hierarchical file system, e.g., Microsoft's Windows HPFS (high performance file system), as the basis (framework) for a taxonomic hierarchy of the topics, subjects, etc. of the contents of the files.
(3) To the greatest extent possible, I do all reading and writing of the files using just my favorite programmable text editor KEdit, a PC version of the editor XEDIT written by an IBM guy in Paris for the IBM VM/CMS system. The macro language is Rexx from Mike Cowlishaw of IBM in England. Rexx is an especially well designed language for string manipulation as needed in scripting and editing.
(4) For more, at times make crucial use of Open Object Rexx, especially its function to generate a list of directory names, with standard details on each directory, of all the names in one directory subtree.
(5) For each directory x, have in that directory a file x.DOC that has whatever notes are appropriate for good descriptions of the files, e.g., abstracts and keywords of the content, the source of the file, e.g., a URL, etc. Here the file type of an x.DOC file is just simple ASCII text and is not a Microsoft Word document.
There are some obvious, minor exceptions, that is, directories with no file named x.DOC from me. E.g., directories created just for the files used by a Web page when downloading a Web page are exceptions and have no x.DOC file.
(6) Use Open Object Rexx for scripts for more on the contents of the file system. E.g., I have a script that for a current directory x displays a list of the (immediate) subdirectories of x and the size of all the files in the subtree rooted at that subdirectory. So, for all the space used by the subtree rooted at x, I get a list of where that space is used by the immediate subdirectories of x.
(7) For file copying, I use Rexx scripts that call the Windows commands COPY or XCOPY, called with carefully selected options. E.g., I do full and incremental backups of my work using scripts based on XCOPY.
For backup or restore of the files on a bootable partition, I use the Windows program NTBACKUP which can back up a bootable partition while it is running.
(8) When looking at or manipulating the files in a directory, I make heavy use of the DIR (directory) command of KEdit. The resulting list is terrific, and common operations on such files can be done with commands to KEdit (e.g., sort the list), select lines from the list (say, all files x.HTM), delete lines from the list, copy lines from the list to another file, use short macros written in Kexx (the KEdit version of Rexx), often from just a single keystroke to KEdit, to do other common tasks, e.g., run Adobe's Acrobat on an x.PDF file, have Firefox display an x.HTM file.
More generally, with one keystroke, have Firefox display a Web page where the URL is the current line in KEdit, etc.
I wrote my own e-mail client software. Then given the date header line of an e-mail message, one keystroke displays the e-mail message (or warns that the date line is not unique, but it always has been).
So, I get to use e-mail message date lines as 'links' in other files. So, if some file T1 has some notes about some subject and some e-mail message is relevant, then, sure, in file T1 just have the date line as a link.
This little system worked great until I converted to Microsoft's Outlook 2003. If I could find the format of the files Outlook writes, I'd implement the feature again.
(9) For writing software, I type only into KEdit.
Once I tried Microsoft's Visual Studio and for a first project, before I'd typed anything particular to the project, I got 50 MB or so of files nearly none of which I understood. That meant that whenever anything went wrong, for a solution I'd have to do mud wrestling with at least 50 MB of files I didn't understand; moreover, understanding the files would likely have been a long side project. No thanks.
E.g., my startup needs some software, and I designed and wrote that software. Since I wrote the software in Microsoft's Visual Basic .NET, the software is in just simple ASCII files with file type VB.
There are 24,000 programming language statements.
So, there are about 76,000 lines of comments for documentation, which is IMPORTANT.
So, all the typing was done into KEdit, and there are several KEdit macros that help with the typing.
In particular, for documentation of the software I'm using -- VB.NET, ASP.NET, ADO.NET, SQL Server, IIS, etc. -- I have 5000+ Web pages of documentation, from Microsoft's MSDN, my own notes, and elsewhere.
So, at some point in the code where some documentation is needed for clarity for the code, I have links to my documentation collection, each link with the title of the documentation. Then one keystroke in KEdit will display the link, typically have Firefox open the file of the MSDN HTML documentation.
The documentation is in four directories, one for each of VB, ASP, SQL, and Windows. Each directory has a file that describes each of the files of documentation in that directory. Each description has the title of the documentation, the URL of the source (if from the Internet, which is the usual case), the tree name of the documentation in my file system, an abstract of the documentation, relevant keywords, and sometimes some notes of mine. KEdit keyword searches on this file (one for each of the four directories) are quite effective.
(10) Environment Variables
I use Windows environment variables and the Windows system clipboard to make a lot of common tasks easier.
E.g., the collection of my files of documentation of Visual Basic is in my directory
Okay, on the command line of a console window, I can type
and then have that directory current.
Here 'G' abbreviates 'go to'!
So, to command G, argument 'VB' acts like a short nickname for directory
Actually that means that I have -- established when the system boots -- a Windows environment variable MARK.VB with value
I have about 40 such MARK.x environment variables.
So, sure, I could use the usual Windows tree walking commands to navigate to directory
is a lot faster. So, such nicknames are justified for frequently used directories fairly deep in the directory tree.
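A POSIX-shell analogue of that MARK.x/G arrangement, for readers on Unix (the author's version is Windows + Rexx; variable names and paths here are illustrative):

```shell
# Nicknamed directories via environment variables, roughly like MARK.VB -> G VB.
# These would be set at login; the paths are made up.
MARK_VB="$HOME/docs/vb"
MARK_SQL="$HOME/docs/sql"

# G VB  ->  cd "$MARK_VB"
G() {
  eval 'dir=$MARK_'"$1"
  cd "$dir" || return
}
```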
are used by some other programs, especially my scripts that call COPY and XCOPY.
So, to copy from directory A to directory B, I navigate to directory A and type
which sets environment variable
to the directory tree name of directory A. Similarly for directory B.
Then my script
takes as argument the file name and does the copy.
takes two arguments, the file name of the source and the file name to be used for the copy.
I have about 200 KEdit macros and about 200 Rexx scripts. They are crucial tools for me.
About 12 years ago I started a file FACTS.DAT. The file now has 74,317 lines, is
bytes long, and has 4,017 facts.
Each such fact is just a short note, sure, on average
2,268,607 / 4,017 = 565
bytes long and
74,317 / 4,017 = 18.5
And that is about
12 * 365 / 4,017 = 1.09
that is, an average of right at one new fact a day.
Each new fact has its time and date, a list of keywords, and is entered at the end of the file.
The file is easily used via KEdit and a few simple macros.
I have a little Rexx script to run KEdit on the file FACTS.DAT. If KEdit is already running on that file, then the script notices that and just brings to the top of the Z-order that existing instance of KEdit editing the file -- this way I get single threaded access to the file.
So, such facts include phone numbers, mailing addresses, e-mail addresses, user IDs, passwords, details for multi-factor authentication, TODO list items, and other little facts about whatever I want help remembering.
No, I don't need special software to help me manage user IDs and passwords.
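A sketch of appending one such fact from a shell, for the Unix-inclined. The exact entry layout in FACTS.DAT isn't shown, so the format below (date line, keyword line, body, blank separator) is a guess:

```shell
# Append a dated, keyworded fact to a FACTS.DAT-style file.
FACTS="${FACTS:-$HOME/FACTS.DAT}"

add_fact() {
  note="$1"
  keywords="$2"
  {
    printf '%s\n' "$(date '+%Y-%m-%d %H:%M:%S')"
    printf 'keywords: %s\n' "$keywords"
    printf '%s\n\n' "$note"
  } >> "$FACTS"
}
```

Appending to the end and searching by keyword is then exactly the keyword-search workflow described above.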
Well, there is a problem with the taxonomic hierarchy: for some files, it might be ambiguous which directory they should be in. Yes, some hierarchical file systems permit a file to be listed in more than one directory, but AFAIK the Microsoft HPFS file system does not.
So, when it appears that there is some ambiguity in what directory a new file should go, I use the x.DOC files for those directories to enter relevant notes.
Also my file FACTS.DAT may have such notes.
Well, (1)-(11) is how I do it!
For photos: folders per device/year/month.
For Office documents, prepending the date using the ISO date format (2017-06-21 or 170621) works great (for sharing with others over various channels like mail/chat/fileserver/cloud/etc.).
It is really boosting my understanding of the French language, and giving me more confidence to speak it.
It's a simple story that's easy to follow, especially having read the book in English and seen the film a couple times. And really, how lost can you get? If you can't follow a paragraph or two, chances are he'll still be stuck on Mars for a while and you won't have missed much.
It's written in an informal, conversational style, using language that real people might use. I find myself reading a phrase that translates back to a saying I've used in English. Ah, looks like they use that in French too. I'll add it to the repertoire.
I can pick it up after a while off and quickly get back in to it without explanation. Hmm... this looks like the part where the guy is stuck on mars...
And as a bonus, it's kinda hard work to read in a foreign language, so if I pick it up in bed it's guaranteed to put me to sleep inside of half an hour.
Just started this book last night. The story begins as the Founder of Clif Bar walks away from selling his company and a $40M personal pay-out. Big idea so far, your business is an ultimate form of self-expression. > https://www.goodreads.com/book/show/29691.Raising_the_Bar
It is a wonderfully written memoir that perfectly details the grad school experience and also includes some helpful notes from the author. I'll be graduating next year (bachelor's in CS), and my dad asked me if I wanted to enter grad school. The book sure did add some fuel to the fire.
Engineering a Safer World, https://mitpress.mit.edu/books/engineering-safer-world
Software Specification Methods, https://www.amazon.com/Software-Specification-Methods-Henri-... (also available through Safari Books Online, at least at my office)
Read most of the third one this week, a useful comparison of the various approaches. My objective is to understand how to better produce formal (or more formal) specifications. Either for whole systems or just for significant or critical portions of them.
Here are the books I've read and want to read: https://booknshelf.com/@tigran/shelves
Here's my (unfinished) reviews of the books I've read so far this year: https://github.com/bcbrown/bookreviews/tree/master/2017. At the end of the year I'll flesh them out a little more.
Harold Coyle, Team Yankee - WW3 in Europe in the 1980s from the perspective of a tank company commander. Poorly written, in my opinion, but the accurate (or so I hope) descriptions of the military tactics and equipment almost make up for it.
James Gleick, The Information: A History, A Theory, A Flood - excellent book about the history of information.
It goes into detail about the Mount Everest disaster in the 90s.
The Dark Tower II: The Drawing of the Three
Wanted to read the first one before the movie came out, now I am hooked...
Seveneves, Neal Stephenson
Astrophysics for people in a hurry, Neil De Grasse Tyson
Grasping the fundamentals means that when it comes to policy decisions (e.g. in the management of certificates) you can see what the consequences of a particular decision are, rather than just hoping that whoever proposed that policy knew what they were doing.
For example, I think a lot of people today use Certificate Signing Request (CSR) files without understanding them at all. But once you have a grounding in the underlying elements you can see at once what the CSR does, and why it's necessary without needing to have that spelled out separately.
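For instance, generating and inspecting a CSR with openssl makes the moving parts concrete. The domain and subject fields below are placeholders:

```shell
# Generate a fresh RSA key and a CSR for a placeholder domain.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.key -out example.csr \
  -subj "/C=US/O=Example Inc/CN=example.com"

# Look inside: the CSR carries the public key and requested subject,
# signed with the private key as proof of possession -- which is why
# the CA never needs to see your private key.
openssl req -in example.csr -noout -subject -verify
```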
Or another example, understanding what was and was not risky as a result of the known weakness of SHA-1. I saw a lot of scare-mongering by security people who saw the SHA-1 weakness as somehow meaning impossible things were now likely, but it only affected an important but quite narrow type of usage, people who understood that could make better, more careful decisions without putting anybody at risk.
1) https://www.ssllabs.com/ssltest/ - try to get an A+. It's not important in most cases in practice, but you'll learn a lot getting there. Their rating guide is also handy: https://github.com/ssllabs/research/wiki/SSL-Server-Rating-G...
2) MITM yourself. I've done this using Charles, you can do it with any HTTP proxy that lets you rewrite requests on the fly - I hear Fiddler is popular. MITM yourself and try changing the page for an HTTP site. Then try doing it on a website that is part HTTP part HTTPS (e.g. HTTPS for login page for example) and "steal your password". Try again on a website that redirects from HTTP to HTTPS using a 301 but does not have HSTS. Finally try on a site with HSTS (nb: you won't manage this one). Congratulations, you now truly understand why HSTS is important and what it does better than most people!
3) Set up HTTPS on a website. You've probably already done this. In which case maybe do it with Let's Encrypt for an extra challenge?
It doesn't hold your hand at all, but it gives you a nice "task" to accomplish. Reading up on all the terminology and exactly how and why it works was really fun.
There was also a nice web page presenting all kinds of PKI concepts that I came across a few years ago but haven't been able to find since then. :-(
After running my own email server for 15 years I gave up a couple of years ago and paid for someone else to solve the nightmare of dealing with the big email gatekeepers.
SMTP isn't a secure transport.
Having your email stored on someone else's computers (ie the cloud) is not necessarily 'secure'.
Having a well-constructed and well-managed host somewhere you physically control seems to me the most 'secure' arrangement, which is what I have always had. Currently for the cost of a Raspberry Pi and occasional 'apt-get update' etc.
That said, there are some things you should be aware of when running a mail server:
1. You need to make sure that the IP address and domain name that SMTP is bound to is not on a blacklist. You also need to consider the trustworthiness of your host because you could very well get caught in the cross-fire if one of their other customers gets them range banned. Certain cloud providers that make it very easy to change IP will more than likely have all of their addresses on some blacklist or another.
2. You also need to make sure you have matching forward (A record) and reverse (PTR record) DNS records for that IP address. This is called Forward-confirmed reverse DNS, aka FCrDNS. Many mail servers will reject email from servers that do not have or have mismatching records for FCrDNS.
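The FCrDNS check described above can be sketched as a small pure function. This is an illustrative sketch with hypothetical names and canned lookup results, not real DNS resolution; a real check would perform the PTR and A lookups with a resolver library.

```python
def fcrdns_ok(ip, ptr_names, forward_lookup):
    """Forward-confirmed reverse DNS: at least one name returned by the
    reverse (PTR) lookup of `ip` must resolve forward (A record) back
    to that same `ip`.

    ptr_names: names from a PTR lookup of `ip`
    forward_lookup: callable mapping hostname -> list of A-record IPs
    """
    return any(ip in forward_lookup(name) for name in ptr_names)

# Toy example using canned data instead of live DNS queries:
a_records = {"mail.example.com": ["203.0.113.7"]}
lookup = lambda host: a_records.get(host, [])

ok = fcrdns_ok("203.0.113.7", ["mail.example.com"], lookup)        # matching pair
mismatch = fcrdns_ok("203.0.113.8", ["mail.example.com"], lookup)  # PTR points elsewhere
```

Many receiving servers perform exactly this round trip before accepting mail, which is why both records have to agree.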
3. You must set up SPF and DKIM. Many mail servers will either reject mail from servers without these, or at least weight heavily against it.
4. You probably want to make sure TLS is set up properly, otherwise your mail is going to travel the internet in plaintext.
5. The IP address you're sending from is going to start off with no reputation. The volume, type of mail, and how many people mark your mail as spam is going to decide whether other mail servers start filtering you or not. You may have no problems here. If you're unlucky, you will need to try to reach out to whichever major mail provider is filtering your mail. Many of them have a ticketing system for this, but you'll be at the mercy of whoever is working that ticket. There are also various whitelists that might be worth trying your server on. They're usually very selective and will probably reject your request.
6. You really, really need to make sure you've got your policies set up correctly because you do not want to accidentally set up an open relay that will be used to spam other people.
7. Greylisting is a very, very effective means of spam filtering. The downside is that mail from new servers won't be delivered instantaneously and will instead be delivered whenever their mail server tries to deliver it again. Other than that, most spam is malformed in some way so some basic DNS checks will filter a ton of it. There are also free RBL and DNSBL lists that will pick up the slack.
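The greylisting behaviour described in point 7 can be sketched in a few lines. This is a minimal illustration of the idea (the class name, delay value, and return strings are all made up for the example), not a production policy:

```python
import time

class Greylist:
    """Sketch of greylisting: temp-reject the first delivery attempt for
    an unseen (client IP, sender, recipient) triplet, and accept retries
    that arrive at least `delay` seconds after the first attempt."""

    def __init__(self, delay=300):
        self.delay = delay
        self.seen = {}  # triplet -> timestamp of first attempt

    def check(self, ip, sender, rcpt, now=None):
        now = time.time() if now is None else now
        key = (ip, sender, rcpt)
        first = self.seen.setdefault(key, now)
        # Legitimate MTAs queue and retry after a delay; most spamware
        # fires once and never comes back, so it never gets accepted.
        return "accept" if now - first >= self.delay else "tempfail"

gl = Greylist(delay=300)
r1 = gl.check("203.0.113.7", "a@example.org", "me@example.com", now=1000)  # first attempt
r2 = gl.check("203.0.113.7", "a@example.org", "me@example.com", now=1400)  # retry 400s later
```

The whole trick is that the state is keyed on the triplet, so a retry from the same server for the same message sails through while one-shot spam never does.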
 http://www.iredmail.org/ https://mailinabox.email/ https://mxtoolbox.com/blacklists.aspx https://en.wikipedia.org/wiki/Open_mail_relay
Go to a small claims court; it will probably cost you no more than USD 200.
Most others suggest asking nicely. You already did, nothing happened, and from the history it doesn't seem like it ever will.
Moved my ETH and LTC elsewhere and sold to a private buyer.
Read more at CB Insights: https://www.cbinsights.com/company/coinbase
Then if they haven't put things right in the time limit you'd go to court, which can be done online now.
I think a similar process exists where you are, and it would force Coinbase into fixing the problem.
I wonder how you will deal with taxes. You offer a service (a number of hours) against payment and are registered as an Ltd (for profit).
The simplest answer is just to mirror the math courses of a Berkeley / MIT / Stanford CS degree, although that will likely be a little overkill, especially if you intend to limit yourself to a strict subset of TYCS. For example, databases and networking generally require very different math prereqs than computer graphics or machine learning.
You will need a high school level of math (grammar school math, algebra, trigonometry, basic stats) to be able to program most things.
Discrete math is used heavily in many parts of CS (it is integral to understanding how to accurately negate programming expressions).
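As a concrete instance of that negation point, De Morgan's laws from discrete math tell you how to correctly negate a compound condition: you flip the boolean operator and negate each operand. A small sketch (the `eligible` example is invented for illustration):

```python
def eligible(age, has_id):
    return age >= 18 and has_id

def not_eligible(age, has_id):
    # De Morgan: not (A and B) == (not A) or (not B).
    # Negate each operand AND swap `and` for `or`:
    return age < 18 or not has_id

# Verify the hand-negated version agrees with `not eligible(...)`
# on every combination of inputs:
all_agree = all(
    not_eligible(a, i) == (not eligible(a, i))
    for a in (17, 18)
    for i in (True, False)
)
```

Getting this wrong (negating the operands but forgetting to swap `and`/`or`, or vice versa) is one of the most common sources of off-by-one-ish logic bugs, which is exactly why the discrete math grounding pays off.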
You should probably understand calculus at a high level, although my experience with actual calculus usage in my career is zero.
Probabilities are used heavily in concepts like caching / performance, which will touch OS, arch, data structures, and likely others. For this, you should find a "statistics for engineers" type of course / book for undergrads, which may or may not make use of calculus to prove some of the statistical concepts.
Linear algebra is used heavily wherever graphics cards are used, so graphics, video, machine learning, etc. Linear algebra will likely have calculus as a prerequisite.
Modulo math is used heavily for cryptography and some data structures (hash tables). An undergrad will get a few days or weeks of this, and probably not an entire course.
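The hash-table use of modular arithmetic mentioned above is easy to see in miniature: the modulo operation maps an arbitrary hash value into a fixed range of bucket indices. A quick sketch (bucket count and keys are arbitrary):

```python
def bucket_index(key, num_buckets):
    # The modulo collapses any integer hash into [0, num_buckets):
    return hash(key) % num_buckets

NUM_BUCKETS = 8
table = [[] for _ in range(NUM_BUCKETS)]
for word in ["apple", "banana", "cherry"]:
    table[bucket_index(word, NUM_BUCKETS)].append(word)

# Whatever the key hashes to, the index always lands in range:
in_range = all(
    0 <= bucket_index(k, NUM_BUCKETS) < NUM_BUCKETS
    for k in ["x", 12345, ("a", 1)]
)
```

The same residue-class thinking carries over to cryptography, where modular exponentiation rather than a simple remainder does the heavy lifting.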
Set theory and graph theory are used sporadically. Networking, distributed systems, etc will make use of them.
I would highly recommend against this.
If you lend money to someone, expect to never get it back ever, regardless of the deal you made. I know this from personal experience.
What I also know from personal experience is not to expect someone to learn how to code because you want them to. I gave my old laptop (which was still working well) to my cousin under the condition that he completed a single Udacity course on programming. I will tell you right now that he did not even come close to finishing the course.
I don't know you or your friend of course, but if I had to put my money on it, your friend is not going to learn to code, and you're not going to get your money back for a long time, if ever.
TL/DR: Lessons from teaching a friend to code.
I have a friend who was looking for a career change. I've spent somewhere between 75 and 150 hours helping him learn to code. (Web development.) Here's what I learned in the process:
1. I highly overestimated how quickly one could learn web development with no prior programming experience. I was too optimistic, and I told him if he put serious time in, he could have the skills to build a simple web app in 6 months. He put in a more realistic amount of time than I'd suggested, balancing other areas of life. It took him closer to 18 months, including enrolling in a coding camp, which he's now about to complete.
2. Charging money for a service helps people take it seriously. At first I didn't charge him, but then I took the advice of a friend who has that philosophy. This isn't definitive evidence, but I think charging for the training helped both him and myself to take it seriously and put effort into it. He's now about to graduate from a code camp, and I'm not sure if he would have done it if not for establishing that mindset that this training is valuable. (I recognize the value of code camps is debatable.)
3. Motivation is an important (and tricky) thing. There were times where he was spending more time on video games than programming. But I remember when I was learning, and programming felt very hard and mysterious for years before I began to feel comfortable making an entire project on my own. The difficulty level was demotivating at times.
I typically charged a couple hundred bucks so that they would have "skin in the game". 100% of the people whose courses I or someone else funded failed to finish. Having skin in the game is absolutely critical to their success.
In this case, it seems they have nothing to lose, and I suspect if they are willing to beg you for cash, they would have no problem going elsewhere.
I suggest some sort of deal where he has to put something in other than "time and effort". Perhaps have him "pay" you in other ways, such as chores around your house. Mowing the lawn, etc.
I was teaching JS basics to a friend the other day who was interested, and he said something that I thought was particularly well-worded. When he stumbled across the idea of classes (as in OOP, that is), I said he should avoid them for now because it's too advanced and it would just make things confusing. I encouraged him to focus on basic functions and control flow.
He demurred, insisting that we do something "actually interesting" and had me teach him how to create a class. He likened the motivation to learning Brazilian jiu-jitsu:
> I don't want to spend hours practicing passing the guard, I want to learn to rip someone's fucking arm off.
And having learned some BJJ myself, and having experienced that exact same desire and irritation, I couldn't help but sympathize. Passing the guard is a crucial part of BJJ, but it feels quite basic and uninteresting in the beginning, much like if statements and for loops.
The point is, make sure your learner is working on something he finds interesting, like trying to put together a basic calculator app or anything concrete that he can relate to. Empty isolated exercises that aren't leading to building anything are detrimental to interest and motivation.
Who knows if he really has motivation to learn to code at all, or is just doing it to get your money. Without his own internal motivation, learning is not gonna work. But even if he has internal motivation, this is still like loaning money to friends but worse.
The barrier to entry is so low -- talking about learning to code, not necessarily finding employment. I have tried with coworkers and family members who wanted to do it for the money and their heart just wasn't in it.
That said, here are some tips. Note that I have never taught someone to code, but I am familiar with mentoring someone through a skill set I already have. Consider all languages below as placeholder terms for whatever stack you are going to teach; they are what I started my career in, so I'm using them here.
- Beware the curse of knowledge. Yeah, I know, XYZ is the most obvious thing in the world, but if you think back to the dawn of time, you'll remember when you didn't understand how a function worked.
- Start slow. This builds off the last one. Start with the basics. I personally learned to code in the following order: HTML > CSS > JS > jQuery > PHP > MySQL. Start easy and lay a solid foundation, then build on that.
- Teach the language before the framework. Okay, this is based off my learning experience rather than my teaching experience. However, if you want your friend to fully grasp and be able to keep going, teach them JS before jQuery and PHP before Laravel. Show them how the magic works. It will make it so much easier for them (and you) down the road.
- Have fun. I know that's a cliche ending to every list ever. But remember to make the process enjoyable. Presumably, you are a programmer, and if you are anything like me, you love what you do. Try to instill that in your teaching. It'll make your friend more likely to stay and learn without fighting the process.
My gut feeling is that it will be hard to make work. Learning to code takes a long time (a bootcamp is 11 - 17 weeks at 60hrs a week, so 660 - 1020hrs). However keep in mind that's entry level proficiency.
I think the best outcome would be that this person learns enough to get into a bootcamp. You'd be shocked how many people apply who just aren't ready to even start. They could learn enough with you to find out if they like it or not, and if so, from there they can take out a private loan to attend one. Keep in mind, I'm not sure how predatory (or not) the companies giving private loans to bootcamp grads are, but it is an option, and at ~$17k in cost, it's steep but not life-derailing if it doesn't work out (my guess is it's about the cost of a broken bone if you're not insured, just a guess there).
If you make the expectation they learn to code to get a job, it probably won't happen, if you level set that they learn enough to get into a program and OWN IT, then perhaps you'll have some success.
Just my two cents, hope this helps!
I do question the lending money thing but maybe there's some aspect of his character that makes you trust him enough to risk the relationship that we're not aware of.
A friend of mine pushed and pushed and pushed me to learn Python, and at one point even paid me a small amount to make two plugins for Anki (a flashcard desktop app), which was really difficult, but they are still on my Github today.
That was enough of a push that then, I pushed as hard as I could into Django, got my first gig, and a year later switched to Ruby/Rails, and have been growing ever since.
I highly recommend this course of action. It absolutely changed my life, and brought me from barely being able to scrounge out a minimum wage job after crossing the country, to making more money than my Father and being able to live effectively wherever I want, and in just 5 years. It's been incredible, and it's all thanks to his kindness toward me.
For your friend I think that is great as you avoided any kind of conflict of interests for him.
I would say, since money is involved and he doesn't necessarily have the same strong motivation that self-learners usually do, that it's important to lay down clear rules, i.e. what does it mean to learn to code. Ideally this should be output-based rather than input-oriented, like building a certain simple app. That's also extremely rewarding.
I always tell people to follow the Stanford course on Swift (from iTunes U); it's absolutely brilliant. Depending on his background something else might be more suitable, but in any case you could act as the TA checking his homework, but also as a classmate/teacher who can answer questions. The bulk of the presentation of new material you can safely leave to a MOOC, I think.
People are either able to code or not. Teaching does not work. Those who are able to code pick up almost all the skills by themselves. If a 'natural born' coder gets into some formal environment, such as university, such a person in two months surpasses the level of all the peers and the direct instructor as well.
At university I was trained in automation. But I quickly learned that coding takes me no effort at all, as opposed to, say, understanding electronics. After reading Wirth, Kernighan & Ritchie, and Stroustrup I often found myself hinting to students from the programming department how to perform their tasks as they were scratching their heads and I was just passing by.
This has nothing to do with intelligence. I'm perhaps not very smart. When I stared at some schematic I had no idea whether it was an amplifier or something else, or what the role of one resistor or another was. At the same time mates from my group read it as if it were written in plain English (err, in plain Russian to be precise). But the very same persons were totally unable to code. It's very strange. For me coding is trivial and takes no intelligence. This is why I do it for a living. The path of least resistance. I'm kind of puzzled why persons smarter than me cannot code.
Anyway, after reading some foundational books the only thing that helps is reading other people's good code. For me it was reading pieces of the old (around 90's) BSD and GNU code.
I never met a person I'd credit with directly handing me over any useful coding skill. YMMV.
There are two problems. If you substitute "gardening" above for "coding," my statement is still true. The second is the nature of learning exercises. We all know that a real problem to solve generates motivation that is much more valuable than contrived exercises, and can self-generate syntax and algorithm knowledge.
With all the free tools, free courses, and free pdfs around, I wouldn't try to train someone who is not already brimming with questions generated by real frustration.
Coding is hard, and takes a lot of hours (like, a thousand or two) to get to basic proficiency. So yeah, you need a lot of will power and motivation to get it done. Especially outside formal setting like university, where you are pretty much forced to do it.
I know plenty of smart people who have uninspiring jobs with low salaries, and they keep talking how they will learn to code, but they don't manage to go past hello world.
If you judiciously assign him tasks to work on, help him get started but encourage him to use solid Google/Stack Overflow skills to solve his own problems, you may end up truly helping him.
The alternative, which scares me a little, is that he will start working on tutorials, get bored, start to doubt himself and then completely disappear from your life because he failed and can't pay you back.
You are saying 'Learn to code, that career path worked for me, it will work for you'.
Unfortunately my daughter broke off the engagement so we didn't get a chance to work on much together but he went on to a career in coding so it worked out pretty good just the same.
Most people simply don't think that way, end of story. Some do though, I hope you picked one of them.
I wouldn't worry too much about teaching him to "code" as there are a lot of ways to be valuable in the industry in general, and helping your friend is a very good thing to do.
In any case, I think you will probably get a lot out of this . Just don't try and force a square peg down a round hole.
Pace, and not the pace of progress, the literal pace of people.
20 years ago when I moved to the bay area, from the east coast there was still a cadence that was much slower than the "new york minute" that I was used to.
Today everyone is rushing from one thing to the next, we all seem compelled to know and respond instantly. Almost everything you listed can be answered with some form of "faster" (smaller elapsed time) than we could have done it previously, and I think that is a big deal.
The "stream-of-consciousness" bit is enabled by the two key features: you choose a finite duration within which to write, and if you stop typing more than a few seconds, your writing is deleted. This essentially forces you to continuously type for the session, and at least for me and the users I've spoken to, this forces out thoughts/ideas/feelings that otherwise wouldn't have made it to the keyboard.
I've personally been using it routinely for months as a therapeutic journal, and at this point I've practically been Pavlov'd into opening it up whenever I'm under cognitive/emotional duress.
It's open source (http://github.com/krrishd/write), and I appreciate feedback!
My main job requires a ridiculous amount of file and data transfers that are mostly scheduled to run during off-peak hours. I needed a way to centralize the results of these jobs in order to keep tabs on things. I built this as an in-house tool and then discovered a few services already existed for this. I thought my solution offered some things these others didn't, and if somebody was paying these other services I might have some success as well. It's been a lot of fun, and if anyone has any suggestions I'd love to hear them.
A Telegram bot that sends me NBA related tweets from the ESPN Stats & Info twitter - https://t.me/nbaespnstats - https://github.com/assafmo/nba-espn-stats-and-info-telegram-... - which was amazing during the 2017 playoffs and made the whole watching experience awesome for me. The channel also has around 20 followers right now, so I guess others like it too. :-)
A script that downloads all my shows every day - assafmo/DownloadMyEpisodes
But it can be used for so much more ( https://mypost.io/post/what-can-i-do-with-mypost ).
It is completely free to use. I don't have any plans to charge for it, and have not even added advertising or anything to it yet, but it still receives maintenance and updates, though no more major feature implementations are planned. It was my first web app and taught me a lot, from learning the basics of database programming to a friendly UI that could be understood by everyone. My sister, who is not very tech or computer savvy, was the beta tester. Whenever she questioned something or got stuck on something, I redesigned that feature to make it even easier. Whether it was functionality or the wording.. if she questioned it, it was redone.
It boosted my confidence in the web app world. Right now, I've got about 8 more web apps in the works, 3 of which are in the stages of beta testing, and though there is a free version, they will actually be paid subscriptions to access additional features. So I am proud to boast about this project, as it was the start of my empire.
I've built many things before, but why am I proud of this one specifically? Basically because I built it with no expectations whatsoever of whether this thing would ever be needed by someone else but me. Also, I built it fast (less than 1 week), polished it a bit, and released it as soon as it was working ok-ish.
And why am I proud of being able to build it although it is not complete? Because I used to deal with perfectionism for so long that I had to force myself to release anything at all. In fact, it used to be very hard for me to even start doing anything for myself, as I would have analysis-paralysis. For quite some time I had to force myself to think about "when good is good enough", read a lot of things about that subject, read other people's opinions on these things, etc. After fighting with my own perfectionism, it seems that I finally can do things having lower expectations. That's why I'm proud.
Waiting for Firefox to approve the add-on now.
I've built http://remindoro.com, a chrome extension to get repeat reminders.
http://palerdot.in/moon-phase-visualizer/ - A simple web demo to understand moon's phases and eclipses.
All of this stuff is open source (my github - https://github.com/palerdot) and I'm proud of these tools.
Just a little tech news aggregator I put together using React and Node. Pulls the top 10 stories from HN and a bunch of subreddits, and pushes updates to the browser every 15 minutes via socket.io.
I've still got plenty of improvements to make to it, but I'm trying to break the habit of working on side projects that I don't ship. So I've shipped this one, even though I won't consider it 'done' for quite a while yet. :)
Not because it was technically difficult, but because it solved a problem that I and seemingly hundreds of other people who signed up were having.
I found a naive yet effective way of adblocking podcasts which is easily scalable. Although it's not yet released, early access is close, and I'm hoping that it takes off. I'm really proud of it because it's incredibly cross-dimensional (i.e., marketing, programming, &c.) and because a podcast adblocker is a non-trivial problem to solve.
None of it is public, however, for obvious reasons.
In block-level encryption each sector is encrypted below the file system. Doing the naive thing of encrypting each sector directly with the encryption key is fundamentally insecure. This is called the ECB mode of operation. There's a nice picture of a penguin on Wikipedia encrypted with ECB which demonstrates this.
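The penguin picture works because ECB encrypts every block independently and deterministically, so identical plaintext blocks produce identical ciphertext blocks. The following toy stand-in (a hash-based block transform, NOT a real cipher and not actual AES-ECB) demonstrates that leakage property:

```python
import hashlib

BLOCK = 16  # bytes per block, like AES

def toy_ecb_encrypt(key, data):
    """Toy stand-in for ECB: each 16-byte block is transformed
    independently and deterministically from (key, block) alone.
    That independence is exactly what leaks patterns."""
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out.append(hashlib.sha256(key + block).digest()[:BLOCK])
    return b"".join(out)

key = b"secret"
# Blocks 0 and 2 are identical plaintext; block 1 differs:
plaintext = b"A" * 16 + b"B" * 16 + b"A" * 16
ct = toy_ecb_encrypt(key, plaintext)

# Identical plaintext blocks yield identical ciphertext blocks,
# so the repetition structure of the plaintext shows through:
leaks = ct[0:16] == ct[32:48] and ct[0:16] != ct[16:32]
```

Scale that up to millions of disk sectors, many of them zeros or repeated data, and an attacker can map out the structure of your disk without decrypting anything.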
Secure modes of operation generally try to propagate the result of previously encrypted blocks into the next ones. But this approach is not really suitable for mass storage devices: you cannot re-encrypt all the sectors after the one you just changed. That's just impractical, since writing to sector #0 would amount to rewriting the entire disk.
So in practice schemes like AES-XTS are used. They work by having some kind of way of "tweaking" the encryption, so that it is different for each block (avoiding the pitfalls of ECB), but in a way which allows random access to sectors (i.e. in a way that is predictable). AES-XTS is a tradeoff for this special use case but it is not as robust as more classical modes of operations which would typically be used in an encrypted filesystem.
Details about AES-XTS issues: https://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/
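The "tweak" idea is easy to show with the same toy transform. This is only a sketch of the concept (mixing a per-sector, per-block tweak into the encryption), not the actual XTS construction, which uses two AES keys and Galois-field multiplication:

```python
import hashlib

BLOCK = 16

def toy_tweaked_encrypt(key, sector_no, data):
    """Concept sketch of tweaked encryption: the sector number and the
    block offset are mixed into each block's transformation. Identical
    data in different sectors (or at different offsets) encrypts
    differently, yet any single sector can still be decrypted on its
    own, preserving random access."""
    out = []
    for i in range(0, len(data), BLOCK):
        tweak = sector_no.to_bytes(8, "little") + i.to_bytes(8, "little")
        block = data[i:i + BLOCK]
        out.append(hashlib.sha256(key + tweak + block).digest()[:BLOCK])
    return b"".join(out)

key = b"secret"
same_data = b"A" * 32  # two identical 16-byte blocks

ct_sector0 = toy_tweaked_encrypt(key, 0, same_data)
ct_sector1 = toy_tweaked_encrypt(key, 1, same_data)

# Different sectors differ, and even the two identical blocks within
# one sector differ, unlike ECB:
no_patterns = ct_sector0 != ct_sector1 and ct_sector0[0:16] != ct_sector0[16:32]
```

Because the tweak is derived only from the sector number and offset, any sector can be re-encrypted in place after a write, which is the whole point for disk encryption.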
The encryption standards it uses are pretty good, but that is not where blanket whole-disk encryption (which I assume you're talking about) fails. For example, hackers could analyze the preboot environment of an encrypted Mac and sniff out the password using a variety of methods. Simply put, whole-disk encryption is too complicated and bug-prone a process to really trust to closed-source software.
As for single-file encryption, which is relatively neat and simple, Disk Utility would probably do a pretty good job.
Basically, I want deploying to be super boring under the hood, but SUPER AWESOME in the office.
Another idea I had was one of those TNT detonator devices, with the handle that you press downward, and it lights up a bunch of lights and then has a little LED animated explosion on the wall. Or the giant hammer thing where you slam a hammer into a thingy on the ground, and the weight goes flying up, and it has to hit the top in order for the deployment to begin.
Ya know what, let's just take all carnie games and turn them into deployment mechanisms. HOW AWESOME WOULD THAT BE??
Software is fun, but hardware is now "easy" -- there are plenty of hardware starter kits from places like Adafruit and Seeed Studio that you could drop into your office and let people have at it.
A few hundred bucks (and let's face it, that isn't a lot if you're doing software) can get you a lot of toys for people to play with and explore.
(discussed here https://news.ycombinator.com/item?id=14582187)
- save 25x your annual spending and never work again
- start by saving at least something (even 1%) and save 50% of all future raises
- long commutes are for fools. So are new cars--buy used.
- spend on things that you value. I've given myself a tech budget for years because good tools matter to me
- host a dinner party instead of eating out (most of the time)
- If you have a gamblers mindset to investing, carve out a small portion (10%?) of your money and use it for risky investing. I call mine the 'casino fund'. Track your returns.
- read voraciously about finance and early retirement. You only need about 20 books or so to gain a background that is easily more valuable than your college degree. This is a good start: https://www.reddit.com/r/financialindependence/wiki/books
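The "25x your annual spending" target in the first point is just the 4% safe-withdrawal rule inverted: 1 / 0.04 = 25. A rough sketch of the arithmetic (the 5% real return is an assumption for illustration, and none of this is financial advice):

```python
def fi_target(annual_spending, withdrawal_rate=0.04):
    # Withdrawing `withdrawal_rate` of the portfolio each year means
    # you need spending / rate saved; at 4% that is 25x spending.
    return annual_spending / withdrawal_rate

def years_to_fi(annual_savings, annual_spending, real_return=0.05):
    """Years of compounding (at an assumed real return) until savings
    reach the 25x-spending target, starting from zero."""
    target = fi_target(annual_spending)
    balance, years = 0.0, 0
    while balance < target:
        balance = balance * (1 + real_return) + annual_savings
        years += 1
    return years

target = fi_target(40_000)           # 25x of $40k spending = $1,000,000
yrs = years_to_fi(40_000, 40_000)    # saving as much as you spend each year
```

Playing with the inputs shows why the advice above hammers on savings rate: cutting spending both shrinks the target and grows the annual savings at the same time.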
Your spouse should have a career, or should think of having a career of his/her own. It's not about having a lot of money; it's to eventually have someone as a financial backup in case things go wrong. Works for both partners.
I love my wife but financially I am in trouble. I make enough money but she has no career aspirations. Her family is quite poor and I had to get mortgage for a house for her parents. In future I will also need to worry about their health expenses.
This effectively means I can never get out of the rat race.
A good piece of advice I picked up from Ramit Sethi (when his blog was still worth reading) was: think about how much of your free time per month you spend on various activities (Facebook, gaming, etc.). How much of that free time is devoted to thinking about your personal finance? If it's less than you think you should be doing, schedule it in.
Also starting reading https://www.reddit.com/r/personalfinance/ regularly.
Now I make sure I have a year's living expenses available outside of my investments. If I get tired of my job, I can just quit knowing that I have the cash to float myself for a while.
A year may be more than you need, but at least six months is a good minimum. You'll have the cash to cover a job loss, car troubles, most medical expenses, etc. on hand without going into debt. And an emergency fund should be liquid and safe, not invested and at risk. You may only get 1% in a savings account, but view the low returns as the cost of insurance, since that's effectively what an emergency fund is: self-insurance.
But if I could go back to age 25, before I was married, I'd have told myself to travel to more far-flung places. Being married, I have to:
A) Agree with my wife on where we want to travel
B) Have time to travel that works for both of our schedules (which is difficult to find... plus we have to spend at least some of our time off going to visit our respective families, and now I have two families to visit instead of one)
C) Have the money to travel. In our case, we have two incomes, but still, it was much cheaper when I'd travel with friends and cram four people into a cheap hotel room.
I'm not complaining here. I'm fortunate to have spare income that lets me travel quite a bit with my wife, and it's really a fantastic experience to travel with your partner. But there are trade-offs that I simply didn't have as a 25-year-old. So those places that are far away and hard to get to? See them while you're young.
Meet and keep in touch with as many people as possible. Switch jobs, travel the world, volunteer and always _always_ make new connections.
The best financial (and personal) gains you will make in life will come from the right connections.
1. Reduce all bills/belongings to bare essentials to live minimally.
2. Pay off all debt while maintaining $1,500 emergency fund.
3. Save 6 months living expenses.
4. Invest in yourself with excellent groceries, gym membership/local park visits, medical/hygiene care and other healthy habits.
5. Invest in Vanguard's Total US Stock, Total International Stock and Total Bond ETFs (% as age) and don't touch it.
6. Invest in building your own business - tech or otherwise.
Anyway, my advice would be:
Start meditation sooner.
I would give myself much other advice about risks, people and self-acceptance, but I would not have been able to listen to it at that time.
That's the problem with advice: you must be in a place in your life where you can actually use it.
But I would be able to meditate and figure it out, since that's how it happened.
Replace that with any tool that helped you develop yourself.
If you don't have such a tool, quickly find one that suits you.
Oh, and yes, travelling helps, so do so. But you'll reach a limit in what it brings to the table. You need to find a better tool in the long run. Just like money helps, but there's an amount beyond which it won't make you happier.
If you can manage to get that one done, you can actually act on the rest of the financial advice in this thread. If not, you will have to be in unanimous agreement to do anything wise with money (i.e. keep emergency fund, plus six months living expenses in liquid savings), whereas foolishness may be undertaken unilaterally.
It cannot be translated easily, but more or less it amounts to:
Do the (financial) math often, limit your cravings, spend less than you can gather.
Contribute more to an index fund.
Save harder for a deposit. High rent/shared housing is horrible.
Don't try to keep up with the Joneses. There'll always be someone richer, with a nicer car; you can't win that game. You weren't born into money, so don't even attempt to act like it. Live below your means.
You need to treat yourself far less often than marketing companies would have you believe.
Also, start mining bitcoin.
Keep your cost of living the same when you see large pay bumps or raises. This means the big things like car, house/rental, etc. Don't just go get a new car and increase your spending or move to a "nicer" apartment or buy a house because you have the money available. Keep the car, stay in the apartment and save the extra money.
People will say that owning a home is an investment - maybe in some areas it is - but not all. If home values are relatively flat in that area or grow very slowly then it is a losing proposition. You will be paying property taxes, school taxes and all the other "taxes" of owning a home: maintenance, repairs, accumulating "stuff" to fill it, etc. If the growth in that area is slow then that is all money down the drain - you won't get it back when you sell.
1. Max out employer's 401k match.
2. Build emergency fund to 3-4 months expenses.
3. Max out Roth IRA.
4. Pay off low interest loans (if you have high-interest loans, which I don't/haven't, then this becomes #1 and pay them off first).
Oh yeah, you don't need to save that internship money, I'm good now. Besides, I make like 5-6 times what you are making.
Understand buying vs renting before doing either.
Keep monthly (recurring) expenses low.
If you absolutely need a car, keep your ego in check and look at mileage & reliability.
Think long term.
The more money you make, the more nice things you acquire, and the harder it is to imagine living without them. At the furthest reach, it's a private jet--the crack cocaine of travel.
Develop these appetites with great caution.
Best financial decision I've made was to buy some ETH (Ethereum) last year.
What are your personal goals in the next 5, 10, 30 years? What do you plan on doing that requires money? How much money does that require?
Without knowing anything about your goals then any advice you will get (as demonstrated in this thread) will steer you toward structuring your life around saving money and getting safe but modest returns. Is that what you're asking for?
Here is a question you should ask yourself probably every 6 months:
"If I had infinite resources (money/whatever) what would I do?"
Take that answer and then figure out how to accomplish that without infinite resources.
Rent. Home ownership only starts making sense on a 5+ year time frame, in some markets 10+ years. Having the ability to move for a better job will reap huge financial benefits, and moving for a short commute will allow you to have so much more free time.
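The 5-to-10-year claim comes from the fact that ownership front-loads large one-off costs that rent doesn't have. A toy breakeven model, with all figures invented for illustration (a real comparison needs appreciation, tax treatment, and mortgage amortization):

```python
def breakeven_year(rent_pm, price, rate, upkeep_pct, sell_cost_pct, years=30):
    """First year where cumulative owning costs drop below cumulative rent."""
    for y in range(1, years + 1):
        renting = rent_pm * 12 * y
        owning = price * rate * y           # rough interest-only carrying cost
        owning += price * upkeep_pct * y    # maintenance, property taxes, repairs
        owning += price * sell_cost_pct     # one-off closing/transaction costs
        if owning <= renting:
            return y
    return None  # never breaks even inside the horizon

# Hypothetical market: $1800/mo rent vs a $350k home at 4% with 1.5%/yr
# upkeep and 8% round-trip transaction costs.
print(breakeven_year(rent_pm=1800, price=350_000, rate=0.04,
                     upkeep_pct=0.015, sell_cost_pct=0.08))
```

With these numbers the crossover lands past the ten-year mark, which is exactly why a short expected stay favors renting.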
Save vigorously, but not so much that you have a dreadful life now, pining for the future when things will get better once you have "retirement money."
2. Be highly skeptical of most of the financial services industry, especially those selling load funds, insurance, annuities, and who want to manage your money.
3. Enjoy simple cars, or no car if you can manage. The amount of money I've seen friends and family dump into vehicles over 25 years is staggering. I don't even see cars at this point. I don't care what others drive, and I don't care what I drive so long as it's reasonably comfortable, safe, economical.
Ideally, you'll come to realize that trading is a waste of your time, and you should set and forget a regular investment flow into the Wilshire 5000 or something equally diverse.
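The power of set-and-forget is just compounding on a steady contribution stream. A quick sketch, assuming a 7% average annual return (an assumption for illustration, not a guarantee):

```python
def future_value(monthly, annual_return, years):
    """Balance after contributing `monthly` at the end of each month."""
    r = annual_return / 12  # simple monthly compounding assumption
    balance = 0.0
    for _ in range(years * 12):
        balance = balance * (1 + r) + monthly
    return round(balance)

# $500/month for 30 years at an assumed 7%/yr, vs. $180,000 contributed.
print(future_value(500, 0.07, 30))
```

The gap between the final balance and the raw contributions is the part trading-in-and-out tends to destroy through fees, taxes, and mistimed exits.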
Lastly, don't let FOMO lure you into investing in the new hotness of your age. For me, it was Internet stocks in 98-99. By the time you're hearing about it and it's productized in a way consumers can get involved, it's too late.
- Always take into consideration mental health cost. Your commute, your work, the people you choose to surround yourself with. Debt in this area is unpredictable and therefore dangerous in the long run.
- No one has it figured out. Youth will always be wasted by the youth.
If I had all the money lost from 'stock market corrections' on my investments, I could retire comfortably today.
Stop being a little fishy swimming with big fishies.
The interest paid today is a pittance compared to the risk. Save your after-tax money in something with near-zero risk until interest rates rebound enough to make the reward worth the risk.
Move closer to your office. Even if the rent is a little more, the price is worth it if you don't have to use your car all the time.
The only advice missing is the kind of prediction that's only possible by actually observing the future (e.g. buy GOOG/AAPL/AMZN).
Also, put some money aside for savings.
- don't waste money on TV subscriptions
- play fewer video games
- take greater care of your friends and relations
I'm building my startup, ejgiftcards.com. It's generating revenue at about 20-30% margins; current revenue is about $50-60k per month.
- Go get yourself a savings account and a checking account. Keep only as much in your checking account as you need to get through the month. The rest goes into savings.
- Buy a home as quickly as you can, in an affordable place on the outskirts. By the time your kids arrive, the necessary infrastructure will be in place. Also, rent is just another form of tax. And having your own home means a place to rest without financial strain when you are old.
- Take the 401K plan seriously.
- Max out other instruments such as the IRA and Roth IRA.
- Buy a durable, long-lasting car, and stick with it as long as it lasts.
- Healthy lifestyle. Nothing pays as well as good health. Buy a bicycle or play a sport. Ensure your heart is healthy and you are not obese. There are other things to this, like learning to cook healthy food. Remember that bad health will also eat a big chunk of your earnings in a place like the US.
- Be frugal. Frugality means making decisions that pay off in the long run. $5 may buy you a burger combo at McD, but trust me, it will cost you in the long run. You don't want that kind of frugality, which is why learning how to cook makes even more sense long term.
- Be productive, at all ages. Keep free time to network and develop new skills. Never be afraid to start from the beginning or to learn and do something new.
- Lastly save. Save a lot.