example: google.com -> http://www.5z8.info/dogs-being-eaten_x2r3rq_5waystokillwitha...
https://irc.verylegit.link/0x8c*download()194mobiads(windows... is supposed to redirect to Facebook, and it does if you use HTTP. However, over HTTPS Firefox just gives me a very generic "Secure Connection Failed" message. (Chrome is rather more helpful, giving me "ERR_CONNECTION_CLOSED".)
I'd have no qualms clicking on it, because my browser and I can handle suspicious websites. (Especially ones ending in .pdf.)
Something that would give pause would be:
I would think...wait a minute... I probably wouldn't click this example.
So, there's that.
(DNSMasq, router-based blocklist.)
Seems dumb on the same level as putting bloody bodies next to the road to check whether people stop.
The way that we got around this was adding another level of indirection and putting the printf format strings themselves into the localized data.
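To illustrate (a minimal sketch in Python, with str.format standing in for printf, and made-up message keys and translations):

    # The format strings themselves live in the localization table, keyed by
    # message ID, so code never hardcodes them and translators are free to
    # reorder the placeholders.
    MESSAGES = {
        "en": {"items_found": "You found {count} items in {place}."},
        "de": {"items_found": "Du hast {count} Gegenstände in {place} gefunden."},
    }

    def localize(locale, key, **args):
        return MESSAGES[locale][key].format(**args)

    print(localize("en", "items_found", count=3, place="the cellar"))
    print(localize("de", "items_found", count=3, place="dem Keller"))

The extra lookup is the level of indirection: the code only ever knows the message ID, never the format string itself.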
Glory to Arstotzka!
POT files are another option, but the tooling is pretty clunky, and in games in particular people don't seem particularly attuned to their use. And they're not great for editing, though they might be for storage -- when dealing with tabular stuff, it just makes a lot of sense to use tools that present a tabular interface. It makes life a lot easier.
What I'd really love to see is an end-to-end example of a non-trivial production-ready project, with all its nitty-gritty details. I'd expect that having a sensible baseline you could look to for general guidance would help improve security and reduce risk.
At work I co-develop an open source Python library for reading VPC Flow Logs - it can be an easy way to get started analyzing them for security: https://github.com/obsrvbl/flowlogs-reader
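Getting started looks roughly like this ('flowlog-group' is a placeholder for your CloudWatch Logs group, and the record attributes mirror the flow log fields - srcaddr, bytes, and so on):

    # Quick "top talkers" sketch: count bytes sent per source address.
    from collections import Counter
    from flowlogs_reader import FlowLogsReader

    traffic = Counter()
    for record in FlowLogsReader('flowlog-group'):
        traffic[record.srcaddr] += record.bytes

    print(traffic.most_common(10))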
My current job has about 14 different AWS accounts; a few are prod, some are lab, and others are meta accounts. I've been thinking about having a dedicated account just for security-related stuff - I see the value in collecting CloudTrail, Config, and the rest - but I'm not 100% sure it's worth the effort to get it set up right now. Thoughts?
Very useful tool for UI prototyping.
Kind of already an established name in the Apple world.
Once I figured it out, the bottom-right chart was chilling.
This seems like something to justify shoving those donation requests in your face, despite Wikipedia drowning in money: https://www.washingtonpost.com/news/the-intersect/wp/2015/12...
* All the great powers have NSA equivalents, meaning they play offense and defense in crypto, RF, and cyber. We (USA) can impose restrictions on our NSA but not on anyone else's. Our exploit-riddled networks are a playground for American, Russian, and Chinese cyber warriors - and probably many others.
* In cyber, offense and defense become the same. Kaplan's book covers this. So a smart country seeks cyber-superiority. The more we hamper NSA, the more we empower foreign cyber-warriors.
* The focus has moved from RF to cyber. Giant antennas are far less important and giant datacenters are the new stars. Vacuuming up packets is less alarming when you understand we've been vacuuming up radio and telephone signals for decades. When comsats were important, NSA was vacuuming up their downlinks. When international telegrams were punched on paper tape, NSA's predecessors picked up the tape each day.
* The US has tried going "NSA-less". It happened in 1929 under the slogan "Gentlemen do not read each other's mail". That noble slogan led to the US operating at a disadvantage in the lead-up to WWII. It doesn't pay to fly blind.
* Fear of an overreaching state is always justified; however, we should focus that fear more on how NSA shares data than on how it acquires it. For instance, fusion centers: https://www.eff.org/deeplinks/2014/04/why-fusion-centers-mat...
Unlike NSynth, synthem80 is directed at a specific and humble goal: making early-80s-style arcade sounds. It uses a mini-language to control an engine similar to the one in Pacman.
For instance, the sound when Pacman eats a ghost:
./synthem80 -o eat-monster.sw 'timer(t=0.5) wav(wf=1 f=2 a=45) add(a=< b=48) nmc(wf=5 f=<)'
Woo hoo you built a noise maker! Kazoos for everybody!
Seems like you could do some very quirky animation using ligatures. For example-- imagine that the ligature for "Im" tilts the "I" toward the "m". Then the ligature for "Imm" tilts the "I" further, and so on, until I type the complete word "Immediately" and the "I" has drifted down below the "y" character.
Repeat this drift for each subsequent letter typed, slowly increasing the amount of displacement for the letters in the ligatures. Then, by the time I've finished typing this comment, all the letters would end up piled up in the corner of this textarea. :)
It ended up being way too much work, and it isn't monetizable due to the copyrighted nature of many logos.
The world, I feel, owes this man a great debt and I fear he is one of the great unsung open source heroes we seem to hear so little about!
There are obviously various reasons for this:

1. Technologically, the instrument ran its course. While the tech was still innovative in the 90s with the new digital effects, it hasn't changed much in 20 years.

2. Electronic music, however, has brought in a lot of new sounds that gave pop music a fresh start in the 2000s.

3. People seem to be more interested in two things: dancing and lyrics. You could once write a song with totally unintelligible lyrics and a good solo and have a hit. I don't think that's true anymore. Similarly, a solo kind of ruins the dancing.
I think die-hard rock fans need to get over it; guitar heroes are not coming back any time soon. I don't think it means the electric guitar is going to die. It is still an amazingly cool instrument. But it means that your average kid may not want to try to play that Ten Years After intro anymore.
Sorry, but my 17-year-old son was so inspired by Mayer about 5 years ago that he invested a LOT of time learning how to play guitar and sing like him and other artists with similar styles.
He is now building quite a steady music career even while finishing high school (he was booked for 3 gigs just this weekend).
He is also interested in past guitar heroes such as Eddie Van Halen, Mark Knopfler, Angus Young, Andy Summers etc. and spends a lot of time going through 'older' stuff to learn more.
While he has a lot of natural ability, there is no arguing that it takes a LOT of hard work. He practices for a minimum of 2 hours a day - sometimes even up to 4 or 5 hours, not counting gigging time. We often have to call him away from his guitar to do school work or eat.
I envy the time he lives in, though - I started playing when I was 15, back in the early '80s, and it was really difficult to find decent gear, and the only way to learn anything new was to try and figure it out by ear or find someone else who knew to teach you. Nowadays, the proliferation of YouTube and other online learning resources, the huge selection of reasonably priced gear, and things like software and hardware modelling amps mean players can dial in ANY sound they want in any situation. Unheard of in my time.
It just needs kids who are interested enough to turn it into their passion.
 - https://www.youtube.com/channel/UCJK-R3HGG09uGBRDs7fhpZw
When I started to learn guitar several decades ago, I would learn guitar solos off of records by slowing them down to 16 rpm (old turntables could do that) and moving the needle back repeatedly to listen to tricky phrases over and over again. It was frustrating, time consuming, and hell on the records.
Today, for just about any popular and many obscure guitar-oriented songs, you can find a YouTube video where someone breaks it down note by note and chord by chord. There are all kinds of resources online for learning scales and theory, and online communities where an aspiring guitarist can connect with thousands of other like-minded people.
I would like to see guitar-oriented rock and roll make a comeback. The heavy metal subculture is thriving without any mainstream radio airplay to speak of, but aside from that, there's just not that much going on.
If I see a local rock band play in a bar these days, about 80% of the time it will be all middle-aged men who have been playing for decades. Some of them are even retirement age.
The next Miles Davis won't play the horn, and the next Jimi Hendrix won't play guitar. There will always be jazz, there will always be rock n roll, but the level of interest in those styles, particularly amongst young musicians, will slide inevitably towards the niche as the next innovative style comes along.
The world keeps turning. This is a great thing for music.
If you feel like discovering a new way around an instrument, and even music in general, and are not repelled by the 70s/80s synth feel, enjoy YouTubing his name.
The man was an extraordinary among extraordinaries. The guitarist's guitarist, as they say.
Besides music, the notion of culture itself has changed; it's palpable. The previous era was inspired a lot by music; today the passion has shifted down, at least as a mainstream thing. It's an industry in maintenance mode. Youngins may not be thrilled to be guitar players, but in a way guitar heroes aren't that interesting anymore. The instrument's value in itself has not decreased.
Times change, and that's a great thing, yet people will always complain. It has happened to things I really loved too. That's just life, I guess, but I've realized that being upset at cultural change is just a self-destructive thought pattern.
The piano is still around, just like the electric guitar will be in the future.
I don't think there will be any major technological advances that make factory-made guitars significantly cheaper and better than they are now. I think a more interesting direction is for guitars to become simpler and easier to build to the point where a non-expert can build one easily without a lot of exotic tools. (In a way, this has always been the case. Cigar box guitars are an old tradition; they're ridiculously easy to make, and can sound very good.)
If you think about it, a Fender Stratocaster is a very minimalistic design that was engineered to be easy to manufacture with 1950s woodshop tools (bandsaws, routers and jigs, etc.). Every Strat clone is a reproduction of a design made for that era of technology. Now that CNC machines are on the scene, a Strat shape isn't substantially easier to make than any other shape, but we keep using it because it works well and because of tradition.
A guitar design that's optimized to be easy for a non-professional to make with a CNC router and a laser cutter and some basic woodworking tools might look somewhat different. This could open the door to extreme customization -- one-of-a-kind harp guitars, unusual pickup arrangements, guitars with three strings and four frets, nine string guitars designed for 31-tone equal temperament or just tunings, or whatever you like.
I expect most guitar buyers will continue to buy traditional Fenders and Gibsons and so on with 6 strings and from 21 to 24 frets and a scale length of 25 inches, plus or minus half an inch. However, for those that want something different, there will always be a minority of tinkerers who build their guitar just the way they like it. That's where I think the most interesting advances are going to happen.
It's not hard to see why. Playing music is hard. The ratio of effort to reward is just terrible. I totally understand why people quit.
I've been expecting the death of the electric guitar since the '80s, but it seems surprisingly resilient.
Edit: In addition, although I'm incredibly old and don't claim to have my finger on the pulse of the zeitgeist, it seems to me that in the post-White Stripes era, playing shitty second-hand equipment is widely considered cooler than playing expensive new equipment.
Guitar playing lost its mojo when it was usurped by aficionados enamored by theory and technique and repulsed by swagger.
If I were 14 today, I'd grab a Maschine, load up some tracks in Launchpad, and start emulating Avicii until I could pack a small dance hall.
Today, there are so many things competing for a kid's time - social media and messaging, mobile apps, video games, Netflix - that kids are choosing other activities instead of solitary, frustrating hours practicing guitar technique.
To become a proficient amateur-level guitarist, it takes around 2,000 hours of practice. That's equivalent to an hour a day, EVERY SINGLE DAY, for 5-1/2 years.
90% of kids learning guitar quit in the first 2 months (according to Fender) - most before they can play their first song well. The first few weeks are particularly brutal - it sounds horrible, it's painful on your fingers, and it takes hours just to get your first chord down.
In one sentence: it's just too hard to learn for the vast majority of people - and it's always been this way. But the difference is that these days, most kids would rather play Pokemon Go or Snapchat - and for kids who are musically inclined, it's so much easier and faster to become a DJ or producer than an instrumentalist, thanks to GarageBand and VirtualDJ and other easy-to-use software apps.
So a lot of musical kids are choosing that route. Why spend thousands of hours alone in your bedroom when you can be DJ'ing your first party in a few weeks?
So, how do we solve the problem of getting more kids to learn instruments, particularly the guitar? Some people have put lights on the fretboard (Fretlight, Gtar, Poputar) but in 25 years, that hasn't proven to make it much easier to learn. Others have gamified the experience (Rocksmith, Yousician) - but the learning curve is still extremely steep.
My company, Magic Instruments, has a different approach. We make it fundamentally easier to learn. Instead of starting by learning traditional guitar chord fingerings, we enable people to start playing chords using just one finger. This gives beginners an instantly positive musical experience - you can start strumming and playing your favorite songs from day one, and start jamming with others in a band in your first week. We then transition people over to learning traditional chords at their own pace.
We've seen 9 year old kids form a band in a few hours. Our hope is that we can inspire these kids to have a passion for practicing music, which will enable them to persevere for the thousands of hours of practice it takes to build the muscle memory to become guitarists.
As a guitarist - for some of those years professionally - I can only say that this is mostly because you can emulate a lot of things today that used to require different guitars to bring out special sounds, and because most music today doesn't have guitar solos. It's hard to imagine guitar heroes coming out of music that doesn't put the guitar up front.
It's not just guitars, though; it's most other instruments. The real heroes today are the composers and producers.
If there is a slow death of the guitar, it is because it's almost impossible to earn any money with it, except for the lucky few. Learning an instrument like guitar takes many, many years, which is quite a hobby. If you spent the same amount of time learning software development, you'd be almost assured of a solid income.
If you want to see a resurgence of guitar playing you can't start with stadium guitar gods, you have to have the guitar house-party hero, and the local club hero, and the regional tour hero. And they're out there. Go out and see them play -- a lot of them are incredible.
I grew up in and around Austin so I'm biased and a little spoiled but there's nothing like a live local show.
There are more than a few parallels between putting together a band and becoming a success and putting together a company and becoming a success.
My theory is based on the fact that learning to play well is difficult and time consuming.
Popular music is by and large no longer the product of humans moving muscles, energizing mass, and thus generating sound waves. The "sounds" we hear in pop music are in the main generated digitally, usually with software instruments, in a computer.
Pop music still uses the more traditional aspects of music composition, but only as a component of an ever-expanding sonic palette.
Modern production is based increasingly on the ability to manipulate "sound" in a computer, and assemble it into listenable compositions.
The human voice remains one of the elements still generated by biological processes. But even the voice is subjected to increasing amounts of digital manipulation.
Learning to produce music in the modern style is also difficult, though in a different way from learning to play an instrument. Specifically, it is not a realtime process.
Given the finite amount of time and resources available to individuals, especially young people, it is inevitable that learning more modern pop production will be at the expense of investing in the extensive training needed to perform music in realtime.
This is compounded by the fact that the economic weight of the music industry is in the world of pop music, meaning the various strains of digitally created music. This is where the money comes in. People who want to make a living doing music will increasingly need to be proficient in modern music production.
This creates a "virtuous cycle" which directs more resources towards this aspect of music, and a "vicious cycle" towards the traditional aspects of musical performance.
There are actually two significant technological forces that enable this structural shift in music creation. The first was the advent of recording and the mass distribution of music. It broke music away from the need to have human performers playing in realtime in order to hear it. This dramatically lowered the marginal cost of experiencing music.
The twin forces of time shifting and mass replication were turbocharged by the advent of digital audio. This, combined with the ever-increasing use of digital manipulation in music generation, amounts to a "singularity" in humanity's relationship with music. A line has been crossed that is permanent; it can never be uncrossed.
To be sure (ha ha), music will continue to be performed (and listened to) by live musicians, indefinitely. But it will be in a context of decreasing cultural influence.
The financial resources needed to support the creation of skilled musicians will continue to dwindle. This effect has been ongoing for decades in the world of orchestral music; now it has come for the world of all performed music.
One might think, what about live music? Won't there always be a demand for live, performed music? I don't think so. Or rather, it will continue the dramatic decline illustrated in the article by guitar sales.
Audiences seem to respond just as well to shows that use essentially pre-recorded music. As long as there is a show of some kind, most of the music-consuming population will not mind if the music heard at a show is "canned".
This makes me a bit sad, but ultimately the endeavors of human creativity will march on, inexorably charting new paths using the astonishing arsenal of software applications that are available these days at a very low cost.
How many people have the patience for that? Particularly in Generation Internet Points Right Now?
DUPE (2 days ago) => https://news.ycombinator.com/item?id=14617079
It is strange, but for a good chunk of time religious thinkers were "more right" about subjects like the eternity of the universe than the prevailing Aristotelian philosophers, sometimes deploying surprisingly modern-sounding ideas in the process. I'm thinking of authors like Philoponus https://historyofphilosophy.net/philoponus and Crescas https://historyofphilosophy.net/crescas
The Stanford Encyclopedia of Philosophy has an entry on the condemnation of 1277: https://plato.stanford.edu/entries/condemnation/
Belief based solely on faith has a pronounced tendency to go off into the weeds. That's the distinction which the scientific method makes, particularly in the tradition of the Bacons -- Roger Bacon (13th c.) and Francis Bacon (16th c.), no relation. Each emphasised the value of observation or experimentation.
Second: if you find your premises, or traditional authorities, at odds with observed reality, you might care to strongly favour dismissing your premises or authorities rather than your observations -- so long as the latter seem to be independently verifiable. An interesting case of this developed in the 19th century, within the field of geology, where the record of the stones was found to be in marked conflict with the record of the scripts, particularly the biblical record. Noted geologists spent not inconsiderable time attempting reconciliation of these records. There was no reconciliation possible, of course; one of those records was simply wrong.
Third: if you've found a persistent discord between observation and theory, then it's quite likely you've illuminated a lacuna in your knowledge or understanding. Again to geology: it was clear from the geologic record that the Earth was at least some hundreds of millions of years old, but no known force or energy could explain the observed temperature of the Earth's interior. The answer turned out to be a previously unknown form of energy potential and release: radioactivity. That also happened to provide the clock by which the Earth's age could be determined, as well as the mechanism on which geology is ultimately founded: plate tectonics. Not fully accepted, it turns out, until 1965, though it's now considered the fundamental organising principle of geology. Which gives us a fourth lesson:
Fourth: You can study a thing for a long, long, long time before you come to a proper understanding of it.
Fifth: Study of ancient authorities isn't wholly useless. I advise people to look to philosophy, especially, if not for truth then as a record of how truth, and error, are arrived at over time.
And finally: it's not enough to come to the correct answer; you must come to it through the right chain of reasoning. Science is structured knowledge. It's not a dry recitation of facts, but rather the structure through which those facts become evident - if not self-evident, then built on observation and mechanism. Precursors of current understanding, absent the underlying structural foundation, are interesting, but are not science as it's properly considered.
The traditional example is that if I cross the street, everything will be different. Parallel universes are related to this nonsensical multiple-parallel-futures assumption.
For those who are interested in seeing things as they are instead of piling nonsense upon nonsense, consider a process of evolution. There are no multiple versions of the same species just because they might have turned the other way. Everything happens as it happens (everything is the way it is because it got that way). It is an unfolding of a single process.
There is no future, or futures. It is "organic" growth (like the growing of a tree). The future is a concept of the mind. A meme. So are parallel universes. There is nothing parallel to what is. At least not outside of one's head.
Code review for anything more complicated than a script has helped the quality of what I write. It also ensures that there are other people who have at least seen the code that I wrote. Even if they don't fully understand it, they are at least empowered enough to wade through it if need be.
> ... you have to explain the difference between plain text and word processing. And text editors. And Markdown/LaTeX compilers. And BiBTeX. And Git.... the barrier to collaborating on papers in this framework is simply too high to overcome. Good intentions aside, it always comes down to, "just give me a Word document with tracked changes"
> ... Google Docs excels at easy sharing, collaboration, simultaneous editing, commenting, and reply-to-commenting.
Using Google Docs is not "good enough" practice for scientific computing. How will you embed parts of your CSV files in the report, and how will you have it all under version control at the same time? Using plain-text toolchains, on the other hand, could solve all of that. They mention Markdown and Pandoc, but not reStructuredText, Org-mode, or AsciiDoc, so it makes me wonder why they would recommend version control and Google Docs together.
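For the CSV question specifically, one way a plain-text toolchain handles it is with a small build step. A hedged sketch (file names invented): regenerate a Markdown table from the CSV, then let Pandoc include it in the report.

    import csv

    def csv_to_markdown(path):
        # Turn a CSV file into a Markdown pipe table.
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        header, body = rows[0], rows[1:]
        lines = ["| " + " | ".join(header) + " |",
                 "| " + " | ".join("---" for _ in header) + " |"]
        lines += ["| " + " | ".join(row) + " |" for row in body]
        return "\n".join(lines)

    with open("results_table.md", "w") as out:
        out.write(csv_to_markdown("results.csv"))
    # then e.g.: pandoc report.md results_table.md -o report.pdf

Both the CSV and the script live in version control, so the table can never drift out of sync with the data.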
The current iteration is at http://cs61a.org - taught by two undergrads for the summer.
The most recent two iterations taught by professors are http://fa16.cs61a.org and http://sp17.cs61a.org
And there's TerichDB: https://github.com/Terark/terichdb
How are they related to each other?
Also TerichDB calls itself open source but then includes this: "TerichDB is open source but our core data structures and algorithms(dfadb) are not yet."
If the core algorithms of TerichDB are not open source, then is TerichDB even usable? Are you going to open source the core algorithms?
All this is rather confusing.
Terark built a new storage engine for databases and data systems, based on the Succinct Nested Trie data structure. Our technology enables direct search on highly compressed data without decompressing it. Thanks to that, we obtain >200X faster performance and more than 15X storage savings (better than Google's LevelDB or Facebook's RocksDB). We are a Y Combinator company (W17).
That match(int) technique seems naïve. How would you handle function arguments/application? The function would also, presumably, be large enough to be either very bad for one's icache or to require more optimisation, which, assuming my previous paragraph is correct, would make this a wasted effort?
Has this been profiled at all?
GHC Core is not a subset of Haskell; it's just a simplification of it. The same thing, but explained with fewer words (constructors).
That being said, we completely agree that the Spineless Tagless G-Machine is full of badassery. Just imagine being able to reduce any Haskell app to, like, 8 different instructions. Something about that fascinates me, even though I'm not quite sure what it is.
I wonder what it would take to add a wiki to lambda-the-ultimate.
I would love to see two more things:
1. Propagation of uncertainty. I often yearned for a calculator that automatically propagated uncertainties for me while writing my (high-school) lab reports. I think it would be life-saver functionality for many students at the very least.
2. True high precision. I don't know how Insect works under the hood (so maybe you are already doing it this way), but instead of using high-precision results of the intermediate operations, store the operations themselves and calculate the final result at the end with the desired amount of precision.
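For what it's worth, here's a toy sketch of request 2 in Python (not how Insect works - just the idea): keep the expression tree instead of intermediate results, and evaluate only once the target precision is known.

    from decimal import Decimal, getcontext

    class Op:
        def __init__(self, fn, *args):
            self.fn, self.args = fn, args

        def eval(self, digits):
            # Precision is chosen here, at the end, not per operation.
            getcontext().prec = digits
            vals = [a.eval(digits) if isinstance(a, Op) else Decimal(a)
                    for a in self.args]
            return self.fn(*vals)

    # (1 / 3) * 3, evaluated lazily
    expr = Op(lambda a, b: a * b, Op(lambda a, b: a / b, 1, 3), 3)
    print(expr.eval(10))   # 0.9999999999
    print(expr.eval(50))   # fifty nines - the error shrinks with the requested precision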
I am aware that both requests require a tremendous amount of change, so you might as well think of them for the next major version! =)
Something I wish I'd had when I was studying Physics.
Old discussion: https://news.ycombinator.com/item?id=13909631
Units are a bonus, but really, the calculator alone is less frustrating than anything else I've ever seen. Google search is too unpredictable - morphing into a typical confusing 1970s-style keypad design after your first calculation. I tried a couple of desktop applications and they were no better. Come on, calculator app designers - stop trying to copy a physical calculator. Those already have a terrible design, and computers don't have the same constraints or freedoms as them.
Even better, it can act like a desktop Windows app simply by saving the page in Chrome! Beat that for latency, Google!
This can easily run locally. If you prefer an online REPL, it's available on repl.it, where you can keep your scripts in the cloud for later, with rudimentary versioning.
https://github.com/hgrecco/pint
https://repl.it
6 megabytes to bytes:

    6 MB -> B = 6000000 B

6 mebibytes to bytes:

    6 mebibytes -> B
    Unknown identifier: mebibytes
e * 1e15 = 2718280000000000
1 Pa * 2 m^2 / N = 2

1 Pa * 2 m^2 = 2 m^2·Pa
1 cup of butter -> g
exp(2 (kg / s))
    Conversion error: Cannot convert quantity of unit kg/s to a scalar
Is there any way I can save a list of variables to file and then reload them?
I also would like to vote for supporting imaginary numbers (Issue #47).
Still, it's pretty good.
1 / (12 c) = 2.7797e-10 s/m
Plain and easy to understand interface and excellent use of colour and space. Two suggestions:
1. While I doubt it'll be used very much, consider adding calories for completeness if nothing else:
1000 kcal -> joules
1000 Cal -> joules
2. Change Variables to Constants. I think this is more in keeping with standard jargon.
Note: cal is based on the gram, while Cal or kcal is based on the kilogram.
sqrt(1/(eps0 mu0)) -> m/s = 299833000 m/s

sqrt(mu0/eps0) -> ohm = 377.031
3:20:36 / 26.2
7:35 * 26.2
US engineering units of energy would also help certain people: BTUs (British Thermal Units), etc.
Can you stop the cursor blinking?
If you do something like pi 1e20, I think it should print out all the digits it has instead of printing zeros.
If not, is the conversion logic in an npm package?
I tried "4 tbsp to oz" and it interprets oz as mass instead of volume. Google correctly gives me 2 as the answer.
$ cat .units
period(len) units=[m;s] 2pi*sqrt(len/gravity) ; (period/2pi)^2 * gravity

$ units
2980 units, 109 prefixes, 97 nonlinear units

You have: period(20cm)
You want:
        Definition: 0.89729351 s
You have: period(20cm)
You want: ms
        * 897.29351
        / 0.0011144625
You have: period(2ft)
You want: ms
        * 1566.5419
        / 0.00063834872
You have: 5 GiB
You want: bytes
        * 5.3687091e+09
        / 1.8626451e-10
You have: 5 hundred million
You want:
        Definition: 5e+08
You have: tempF(100)
You want: tempC
        37.777778
I like the idea of keeping units, but I'm not sure this makes things easier:
In Insect:

    V * A / J = 1 VA/J

versus GNU units:

    You have: V*A/J
    You want:
            Definition: 1 / s
round(pi * 1000000000000000000) = 3141590000000000000
(similar comment by @btown down there about e)
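For reference, the correctly rounded value (computed here with Python's decimal module and a 30-digit pi, since float64 alone can't settle the last digits):

    from decimal import Decimal, getcontext

    getcontext().prec = 30
    PI = Decimal("3.14159265358979323846264338328")
    print((PI * 10**18).quantize(Decimal(1)))   # 3141592653589793238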
Unknown identifier: rpm
U = 1e-6/60 * kat
    = 1.66667e-8 kat

1 kat -> U
    = 1 kat
1 month -> days
    = 30.4167 d

1 year -> days
    = 365 d
Remember Graphing Calculator? That could use such features.
Autodesk Inventor understands units in formulas, but mostly for length and angle. Everything becomes meters internally.
Wut? Looks like some strangeness for the parser.
It'd be cool to have this as a Python script somewhere, but I am not quite sure whether I would visit this site whenever I need to calculate some physical units, especially since Google already covers most of my needs.

Don't get me wrong - I think it otherwise looks and feels great and is easy to use, but I don't know who will use this.
1 mile / 2 km
    = 0.804672

1/2km
    = 0.5 km
Note: if you're on macOS and are using the supplied 'units' utility, that is BSD units, not GNU units. You're going to want to install gunits from Homebrew.
Any hope of an Android app version soon?
sin(30 rad)
    Unknown identifier: sin
Meaning of life
    Unknown identifier: Meaning
7.8L/100km -> miles/gallon
    Conversion error: Cannot convert unit L/km (base units: m^2) to unit mi/gal (base units: 1/m^2)
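To be fair, the two units are dimensional reciprocals (volume per distance vs. distance per volume), so a straight conversion can't work - you have to invert. A rough check of the expected answer (constants from the definitions of the mile and the US gallon):

    L_PER_US_GAL = 3.785411784   # litres per US gallon
    KM_PER_MILE = 1.609344       # kilometres per mile

    def l_per_100km_to_mpg(l_per_100km):
        km_per_litre = 100.0 / l_per_100km
        return km_per_litre / KM_PER_MILE * L_PER_US_GAL

    print(round(l_per_100km_to_mpg(7.8), 2))   # ~30.16 mpg (US)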
Which means it won't work offline or when Google inevitably shuts down the API.
import speech
speech.say('Hola mundo', 'es_ES')

import sound
r = sound.Recorder('audio.m4a')
r.record(3)  # seconds

text = speech.recognize('audio.m4a', 'en')  # sent to Apple servers
You almost never want to use the system Python. Use pyenv to install the version of Python you want, and then create a new virtual environment for each project.
It appears to get security fixes only - few bug fixes. For example, the LTSB Start menu is completely broken - there is no search on it, and it takes 3-4 seconds to show. Presumably related to the lack of Cortana, but who knows. In any case, it's been a widely reported bug in LTSB for a long time, and it's a pretty fundamental feature in Windows.
I could see the point in being on the LTSB branch of windows 7 because that OS is done. But windows 10 isn't nearly finished yet and very rough around the edges. Being on LTSB of such an old Win10 release is like being on the release day version of a AAA game while everyone else runs the fixed version that came out 3 months later.
It is a proven model in the Linux world. The unpleasant thing is that Microsoft shunted millions of users on to the equivalent of Fedora or non-LTS Ubuntu without bothering to explain this.
1. Turning off Cortana using these instructions: https://www.howtogeek.com/265027/how-to-disable-cortana-in-w...
2. Turning off most of the visual effects under "Adjust the appearance and performance of Windows." (I left "Smooth edges of screen fonts" and "Enable Peek" checked)
The combination of these two changes feels like a whole new computer.
My assumption is that this ecosystem will rather soon bring Microsoft's clumsy attempt to become Google (one stack layer closer to the user) to a horrific end.
I wonder if I can get Windows 10 LTSB preloaded by Central Computer, the retailer. They installed Windows 7 with no bloatware for me. (I asked for that, and the invoice actually reads "no bloatware")
I had a Lenovo S500 desktop that I just reinstalled. Non-SSD. I was getting a '100% disk usage' issue right off the bat. I installed 'windows10startfresh' and now it's pretty sweet - the PC is not slow, and such. Apparently 'windows10startfresh' installs Windows without all the Lenovo crud.
Go old school and customise the default profile in the build for best results!
The best part is that if you have an education volume license, you get both EDU and Ent LTSB editions!
> In fact, the default Start menu on Windows 10 LTSB doesn't even include a single tile. You won't find any of those new Windows 10 apps installed, aside from the Settings app.
That sounds fantastic. I use absolutely none of those "features" and would love to be able to remove them from my copy of windows 10.
Given that it's basically free to register as a commercial entity (in Germany it costs €40-50, and IIRC a British LLC can be formed for less), can one do so and then apply for said subscription program?
edit: are offerings like https://www.lizengo.de/microsoft/windows-10-enterprise actually legit, and can these be used to activate a LTSB installation?
So, is the bar so much higher or is it something else?
I used Slicehost and EC2 from Europe with total disregard for latency because I never had many users. For my (mostly internal) servers it was fast enough.

And even now, I have the cheapest Scaleway machine with a public-facing website, and it seems to be running a small Angular 4 + Java backend app just fine.
I would also like to see a graph showing the latencies between all the AWS regions, which I guess would show that the regions' placement has a logic to it and that having servers next to your users makes sense.
Still, why worry about this from the start when your monthly 'budget' is less than the price of your breakfast coffee and you get unmetered bandwidth?
How come we're 5 times above that?

Is the latency introduced by routers? If so, that doesn't quite make sense: I doubt there are any in the middle of the Atlantic.

Or is the routing so inefficient that the data travels 50,000 km?
I took about 5 sites from a $50 a month shared cPanel plan that included a few WordPress blogs and some custom sites and put them on a $3 a month scaleway instance and haven't had a bit of trouble.
A few years ago, dedicated Raspberry Pi hosting was a bit of a thing for a while.
I looked around a few months ago, but I didn't turn up a clear winner.
If ad companies fix anything, please fix the "I searched for something and bought it, but I still get ads for it for the next 4 weeks" problem. That bugs me.
I've already paid for YouTube Red and couldn't be happier.
I found the link here on HN.
Said differently: the relevance that can be extracted from your specific email is less than the cumulative knowledge that Google has about you from other sources.
This seems like one of those decisions that is a net negative for functionality in favor of quelling some misguided privacy concerns. Hopefully this doesn't lessen the quality of ads by much.
Now Google knows:

- all your searches (your interests)
- a great percentage of the pages you visit (ads and analytics)
- all your contacts and how frequently you connect with them (metadata in Gmail)
- the places you visit (geolocation in Android)
- Google logins (the sites that interest you the most)

Your email contents are completely unnecessary.
Interesting timing with the story earlier this week about Wal-Mart telling vendors to stop using AWS.
I wasn't expecting this from a company that makes most of its revenue through advertising. It sets a cautiously positive precedent.
For example, if I write "I love coffee" in an email, am I more likely to see a Starbucks ad when I watch a YouTube video?
Companies can make money without tracking you 24/7 and reading all of your private content. They just choose not to, because it's easier, and then spread the propaganda that those things are "needed" to stay in business.
That would certainly cause an enormous loss of goodwill, but.... imagine this scenario:
Google has some slow growth quarters, they need to keep the numbers up for shareholders. They start to examine what they can squeeze. Gmail costs them X (hundreds?) millions per year, but doesn't gain much from it...
Certainly it's unlikely, as it is also an SSO tool, etc. Still....
Anyway this may solve the tragedy of the commons situation we're in now and allow us to move away from the technology war of ad blockers, ad blocker blockers, ad blocker blocker blockers, etc.
edit: removed comment on clarity of Gsuite vs Free due to downvote brigade.
I used to see Google as a stalker. How naive I was. There are billions of people being stalked and exploited, and the whole process is automated. It occurs to me that users are not victims of stalking; they are lab rats.
Now Google has stopped reading emails for ads, but they'll still read them for other purposes, which still makes me, as a user, feel insecure. I value my privacy, I have my dignity; I shouldn't be a lab rat that they can observe however they want just because they provide free cheese.
Even if I start using a paid account to stop them from reading my emails (assuming a paid account with better privacy protection is possible), I couldn't stop them from reading others'. Stopping the data collection of one user won't change the situation; they still have other lab rats' data they can collect and analyze, which enables them to learn or predict other rats' behaviors.
The worst thing is, Google is not the only company doing this right now. Surveillance technologies are developing; it's like every data company has grown teeth and become more thirsty for blood.
I tried searching for a similar study for NYC, but all I found were old articles from years back.
It doesn't look like the MTA shares any measurements of temperature in their data feeds: http://web.mta.info/developers/download.html
Does anyone have ideas on how we could get this sort of data for NYC subways?
Too late now, but I wonder if they considered district cooling, where e.g. cold water from rivers is used as an alternative to air conditioning. Seems to be used successfully in my hometown of Munich: https://www.swm.de/english/m-fernwaerme/m-fernkaelte.html
And instead of ventilation shafts, you would probably need active heat pumps.
If a major heat component is braking, then locating the additional cooling capacity where braking is heaviest (presumably on inbound station approaches) might offer advantages -- at the very least this reduces the total treated area for maximum effect.
Given the possibility of ground-based thermal banking, and the long-term nature of the issue, if any amount of coolant could be circulated through the thermally-affected clay, and made available for seasonal heating needs elsewhere in the city, that might be a net win.
I'm familiar with geothermal energy projects elsewhere (borehole projects in Australia, the Habanero project) where the problem is actually inverted: thermal extraction cools the strata around a borehole, over the course of about 40-50 years, to the point that no further useful heat can be extracted.
The thought also occurs that the steel rails themselves are thermally conductive and might be made a part of the cooling system. Not a tremendous radiative surface, but a long conductive length. They're poorly placed for effective heat extraction, though: low within the tunnel rather than high.
Much more of the power would be preserved, leading to less heat.
Regenerative braking sounds like the quickest and cheapest way to address the problems - not that any change would be 'quick' or 'cheap'.
Run ventilation on high during winter and keep temps quite low in the tunnels.
Run ventilation on high on cooler nights.
Install cooling tubes in the surrounding clay and cool it directly. Either from above, or from the tunnel itself, a ground-source heat pump (geothermal heat pump) to pull heat from the clay. These can be powered by the cheapest available power, likely solar on sunny days in the future.
Upgrade motors to highest efficiency available. This could halve the waste heat from the motors.
Regenerative braking: if it is too complex to put the power back on the grid, build large "electric kettles" and dump the power into a vat of water with resistance heaters. The water vat could be part of a water main, so it would be constantly refreshed and result in slightly warmer water for water users.
Instead of ice in the cars, cool brakes and motors that exceed 100 C with water, by boiling the water off. This absorbs terrific amounts of heat per kg of water.
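Back-of-envelope on that last point, with standard constants for water (assuming it starts around 25 C):

    C_WATER = 4.18e3   # J/(kg*K), specific heat of liquid water
    L_VAPOR = 2.26e6   # J/kg, latent heat of vaporization

    heat_per_kg = C_WATER * (100 - 25) + L_VAPOR
    print(heat_per_kg / 1e6)   # ~2.57 MJ absorbed per kg of water boiled off

For comparison, melting ice absorbs only about 0.33 MJ/kg, so boiling water off is several times more effective per kilogram carried.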
As the train arrives, it gets slowed by a spring or similar system, which is then used to propel the train forward once it needs to depart.
Also, what about just slower trains? The heat produced while accelerating or braking is probably not exactly linear with the speed of the train.
You'd end up with lots of ventilation grates on the sidewalks on the surface, but that seems like an easy and space efficient solution.
I don't live in London so anyone with regular riding experience please correct me if I'm wrong.
Use the heat from the air in the tube to warm the water that is getting pumped up anyway. (Too lazy to do the math to see if this would make any difference.)
On a related subject, the temperature of any cave will remain almost constant at the location's average annual surface temperature. So another option would be to focus on not producing so much heat in the entire city in the first place, to lower the average temperature.
Add cold to the tunnels, rather than taking heat out.
Build liquid-air plants above ground, 2 or 3 floors up in the air, so that the heat of the pumps is released above street level and the noise can be kept away from the street. Feed the liquid air into the tube tunnels through insulated pipes, which take up far less volume than air vents. Let gravity bring the liquid air down the pipes. Release the liquid into the tunnels near platforms, where the air-pump effect of moving trains causes lots of air circulation. Also, the car doors open on the platforms.

Since you are liquefying the air, not just the oxygen, it can be safely released anywhere in the tunnels. And if your air intakes are high up, you will actually be improving the air quality in the tunnels as well, i.e. cleaner air flows in.
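A crude capacity estimate, with approximate constants (liquid air boils around -194 C, with a latent heat of roughly 200 kJ/kg):

    L_VAPOR_AIR = 2.0e5   # J/kg, latent heat of vaporization (approximate)
    C_AIR = 1.0e3         # J/(kg*K), specific heat of air (approximate)
    DELTA_T = 219         # K, warming the boiled-off gas from -194 C to 25 C

    cooling_per_kg = L_VAPOR_AIR + C_AIR * DELTA_T
    print(cooling_per_kg / 1e3)   # ~419 kJ of cooling per kg of liquid air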
> Whether this can be viable is still being looked at, bearing in mind that they already struggle to fit air conditioning units into tube trains, finding space for the ice blocks is going to be even more of a headache. And not to forget that the extra weight means more energy needed to drive the trains, driving up running costs.
This can be worked around: do the chilling on the wayside, not on the train! At each station, run chillers that can reject waste heat on the ground -- and chill a nontoxic glycol/water mixture to -40 C. Commercial equipment exists to do this already. Have air/liquid heat exchangers on each EMU, along with glycol storage tanks, a pump, and sensors to keep track of the temperature/volume of the glycol in each tank. On the roof of the EMU, install large-diameter quick-mating liquid connectors, along with fiducial marks. At each station, wayside equipment uses computer vision to locate the fiducials on the EMU, mates with the connectors, does a pressure test to verify the integrity of the connection (squirting glycol is a no-no), pumps out all the warmed glycol (and replaces it with cold stuff), and disconnects. This can be done during the dwell time if the connectors and refill tubing are of large diameter.
Cooling loads are on the order of 50 kilowatts per EMU and inter-station times are on the order of 10 minutes, which means 30 MJ per EMU. The ending temperature of the glycol will be on the order of 10 C (you need a temperature differential to ensure heat flows from the glycol to the air), its initial temperature will be -40 C -- a temperature difference of 50 K. Glycol/water mixtures have a specific heat of around 3.2 kJ/(kg·K), so we have:

    30 MJ = 3.2 kJ/(kg·K) × mass × 50 K
leading to a mass on the order of 200 kg, which is quite tolerable for a rail vehicle. The tanks for the glycol can be spread around the car and can be arbitrarily shaped (as long as fluid can be circulated and offloaded) to deal with other constraints. There are no phase changes involved, which makes the heat-exchange work non-annoying; there's just liquid glycol and air. EMUs don't need to haul around an air-cooled chiller; all the equipment on the EMU is reliable, does not consume much electricity, and is extremely tolerant of vibration and the harsh environmental conditions aboard a rail vehicle.
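A quick sanity check of the arithmetic above, using the same figures as the text:

    ENERGY = 30e6      # J per EMU between stations (50 kW for ~10 minutes)
    C_GLYCOL = 3.2e3   # J/(kg*K), specific heat of the glycol/water mix
    DELTA_T = 50       # K, from -40 C up to +10 C

    mass = ENERGY / (C_GLYCOL * DELTA_T)
    print(mass)   # 187.5 kg - on the order of 200 kg, as stated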
If you want to reduce mass further and are willing to accept some more complexity, it might make sense to use a small chiller on each EMU that uses the glycol for heat rejection. What does this give you? It means that you can still generate a constant chilled-water temperature of 10 C, but let the end temperature of the glycol go above 10 C -- and more temperature range on the heat-storage fluid means more heat energy can be dumped into it. When the glycol temperature gets above 10 C, turn on the heat pump to create chilled water at 10 C, and reject heat into the glycol (stopping before it boils). Liquid/refrigerant heat exchangers are much smaller than air/refrigerant exchangers, so if your compressor isn't obscenely heavy, you can likely save some weight. If you can use the glycol from 10 C (when heat won't passively flow from the glycol to the air) to 60 C (a reasonable condenser temperature) -- that's another 50 K worth of temperature difference, which means our 200 kg load of glycol can be cut in half.
That's a very good metaphor for our planet.
What this seems to be calling a CTO is more akin to a most-senior engineer/fellow/hacker. I've seen it called Chief Engineer before. That's the person the CTO should be able to hold their own in a conversation with, but being that person seems unlikely for an exec-team member as the business grows.
*Titles are more or less meaningless unless there is internal conflict or you're interacting with someone external; ignore that bit and think in terms of roles.
I had a general line/rule of thumb. At < 6 engineers, you had to write code regularly as the team was so small it couldn't carry the weight of a member of the tech team who didn't commit regularly.
At 12+ engineers, you didn't have time to write code if you wanted to do the other work (management, prioritisation, reports, strategic thinking, etc.) well enough.
At 6-12 engineers (where I spent most of my time), I didn't have time to write code, but had to in order to keep the company moving. Cue 60-100 hour weeks for 10 years. Yeah.
I went to quite a few CTO events, and in all honesty, it was a surprise to many of them that I both knew how to code and actually spent time doing it. I thought it was insane that there are CTOs - many of them - who aren't interested in the practice of creating technology at a hands-on level, but I could also understand how that happened: in my location (London), it's quite normal for non-technical CTOs to take over from founding CTOs after a few years.
It's a weird situation to be in, and eventually a couple of years ago I decided to evaluate what I wanted and wrote a list of what I liked and didn't like about my job as a CTO.
I realised all the things I enjoyed were actually the responsibilities of a senior engineer, and all the things I didn't like were the management and board duties of being a CTO. Slept on it for a week, resigned, applied for senior roles, and generally am much happier (2 years on).
It's worth really thinking about what you want from the role. If you're a co-founder, you can shape it, but you have responsibilities to your investors, wider board, exec team, managers, and developers. Most importantly, you have responsibilities to yourself.
Choose your own adventure when it comes to being a CTO, but choose wisely and carefully.
Outside of executing the core strategy you can take an opportunistic approach to:
1/ create more strategic options for the company
2/ cut costs by automating or re-engineering business processes
3/ deliver an unfair technical advantage over competitors
4/ improve reliability of service
5/ introduce more technology in the rest of the business (sales, marketing, operations, ..)
Startup CTOs tend to combine many different roles, as there are more roles than people to fill them. Generally startup CTOs wind up also doing product management, engineering management (people, culture), recruitment, Scrum master duties, IT, support, BI, architecture, and programming.
What you actually wind up doing depends on the needs of the business and the available talent in the company to delegate these roles to.
A CTO role is essentially that of a Technical Product Manager. What distinguishes it from a traditional TPM is that, to do the role well, you also take on the aspect of being the technical "moral authority" for the company, setting the de facto engineering culture, and creating a compelling vision that goes beyond product management.
If you're CTO because you co-founded the company or you joined at an early stage and you just happened to be the most senior engineer at the time, you really have no idea what being CTO of a large company is like.
I have struggled with these roles and names, similar to Bryan Helmig, the author of this post. I have consulted with some former bosses, who now lead engineering teams at some of the larger companies here in the Bay Area, and have come to the conclusion that most CTOs are really VPEs just using the wrong title. But in the end, the title does not really matter, since it is extremely flexible in definition.
For example: "Don't innovate in the management structure." Sure, if you're an average person, then don't. But some companies have (Valve, young Google) and have shown outstanding results.
I'm very skeptical of reducing the CTO title to rules like this. This is garbage conversation fodder; it appeals to our weaker human side to create a facade of self-improvement, but none of us are gonna remember this article in 2 months.
I'm having the same considerations and am writing about the challenges CTOs of startups of different sizes face on a daily basis at http://cto.pizza
The concept is simple: we talk about your growth, team, and tech challenges over a pizza. Let me know if any of you might be interested in grabbing one.
I'm based in Paris but we could figure something out over Skype or something if you have interesting stories to share!