Many of the other formal verification tools make it very easy for your implementation to drift from, or become entirely unmoored from, your specification, but they let you keep working at a level of abstraction that's still very familiar. Tools like SMACK and DiVinE are helping to close the gap between spec and code, though.
Using something like Coq with program extraction brings spec and implementation into much, much closer alignment, but it carries its own problems: writing a complex program in a very abstract language far from an engineer's typical problem domain, a fairly limited set of target languages supported for extraction, and still having to place an awful lot of faith in the extractor (which itself is unverified, as near as I can tell). Those are the things currently keeping me out of Coq for immediate use cases.
The good news is that there's a lot of fairly high-profile work being done (like Verdi) to bring formal methods to increasingly complex software problems in ways that make them more approachable and usable, and that's truly wonderful.
- packaging via OPAM to separate framework code and system code
- support for specifying and proving liveness properties
- support for proving Abadi/Lamport style refinement mappings between systems
The following workshop paper gives an overview of current and upcoming work: http://conf.researchr.org/event/CoqPL-2017/main-verification...
I don't watch much BBC, so I don't know if that changed lately. It's a change for the better, imo.
(It's a video a few minutes long and worth watching. I rarely watch videos on news sites, but glad I watched this one.)
That's a beautiful bridge though.
Some top tips: if you decompress the streams first, you'll get something you can read and edit with a text editor:
mutool clean -d -i in.pdf out.pdf
Text isn't flowed/laid out like HTML; every glyph is more or less manually positioned.
Text is generally set in subset fonts, so characters end up being mapped to \1, \2, etc. As a result, you can't normally just search for strings, but you can often - though not always easily - recover the characters from the font's ToUnicode map.
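To give a sense of what you're editing once the streams are decompressed, here's a hand-written, illustrative fragment of a text-drawing content stream (not from any real file), wrapped in Python only so the annotations have somewhere to live:

    # Illustrative only: a tiny PDF content stream of the kind you'd see after
    # `mutool clean -d`. The %-comments are valid PDF comment syntax.
    fragment = rb"""
    BT                   % begin a text object
    /F1 12 Tf            % select font resource /F1 at 12 pt (often a subset font)
    72 720 Td            % move the text cursor to (72, 720); origin is bottom-left
    (\001\002\003) Tj    % paint glyph codes 1, 2, 3 - their meaning lives in the ToUnicode map
    ET                   % end the text object
    """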
If you need more, the "free" (trade your email for it) e-book PDF Succinctly from Syncfusion demonstrates manipulation barely one level of abstraction higher (no calculating offsets manually): https://www.syncfusion.com/resources/techportal/details/eboo...
"With the help of a utility program called pdftk from PDF Labs, well build a PDF document from scratch, learning how to position elements, select fonts, draw vector graphics, and create interactive tables of contents along the way."
Of course, PDF is intentionally this weird: it was a move by Adobe because other companies were getting too good at handling PostScript.
Embedding custom compression inside your format is seldom worth it: .ps.gz is usually smaller than PDF.
It's used everywhere because you can do everything with it.
This also leads to the problem that you can do anything with it.
So each industry is kind of coming up with its own subset of PDF that applies some restrictions, in the hope of making documents verifiable.
The downside is that these subsets slowly start bloating until they allow everything anyway.
I'm looking at you, PDF/A. Grr.
Thanks for the write-up, Max! I want to clarify something, though: how do you handle and account for preemption? As we document online, preemption rates oscillate between 5 and 15% (on average, varying from zone to zone and day to day), and they will also be higher for the largest instances (like highcpu-64). But if you need to train longer than our 24-hour limit, or you're getting preempted too much, that's a real drawback. (Note: I'm all for using preemptible for development and/or all batch-ey things, but only if you're ready for the trade-off.)
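The usual way to make that trade-off workable is aggressive checkpointing, so a preemption costs you minutes rather than the whole run. A minimal sketch using the TF 1.x API of the day (the toy model and paths here are purely illustrative):

    import os
    import tensorflow as tf

    # Toy model: minimize (w - 3)^2, just so the sketch runs end to end.
    w = tf.get_variable("w", initializer=0.0)
    loss = tf.square(w - 3.0)
    global_step = tf.train.get_or_create_global_step()
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

    os.makedirs("./ckpt", exist_ok=True)
    saver = tf.train.Saver(max_to_keep=3)
    with tf.Session() as sess:
        ckpt = tf.train.latest_checkpoint("./ckpt")  # did a preempted run leave state behind?
        if ckpt:
            saver.restore(sess, ckpt)                # resume where the dead VM left off
        else:
            sess.run(tf.global_variables_initializer())
        for _ in range(10000):
            _, step = sess.run([train_op, global_step])
            if step % 1000 == 0:
                # Worst case after a preemption: you lose fewer than 1000 steps.
                saver.save(sess, "./ckpt/model", global_step=step)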
While we don't support preemptible with GPUs yet, it's mostly because the team wanted to see some usage history. We didn't launch Preemptible until about 18 months after GCE itself went GA, and even then it involved a lot of handwringing over cannibalization and economics. We've looked at it on and off, but the first priority for the team is to get K80s to General Availability.
Again, Disclosure: I work on Google Cloud (and love when people love preemptible).
Disclaimer: Paperspace team.
 - https://cloud.google.com/tpu/
Then you scale up to the cloud to do hyperparameter search.
There is a notable CPU-specific TensorFlow behavior: if you install from pip (as the official instructions and tutorials recommend) and begin training a model in TensorFlow, you'll see console warnings that the prebuilt binary was not compiled to use CPU instruction-set extensions (SSE, AVX, etc.) that your machine supports.
FWIW I get the console warnings with the tensorflow-gpu installation from pip, and I verified that it was actually using the GPU.
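For anyone who wants to check the same thing, here's roughly how I'd verify it (TF 1.x API; the session config flag logs where every op lands):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # A working GPU setup shows up here as something like /device:GPU:0.
    print([d.name for d in device_lib.list_local_devices()])

    # Or log device placement for each op as the graph runs.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        a = tf.constant([1.0, 2.0])
        b = tf.constant([3.0, 4.0])
        print(sess.run(a + b))  # placement lines appear on stderr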
This is with a small(ish) network of perhaps a few hundred nodes... should I see a speedup for this case, or are GPUs only relevant for large CNNs, etc.?
Like in Chiang Mai, where the street food is just to the east of the square walled area: I wanted to mark that as having street food, because that would be useful to exactly the kind of visitor a tourist site attracts, but there's just one huge "tourist" zone, so it can't be done. "Tourist" is only so helpful to tourists :)
This is a unit of journalistic measurement I have never come across before.
Edit: 2 min of googling and I found a not very well known 20 km impact crater dated to exactly the right time (mid 15th century): https://en.m.wikipedia.org/wiki/Mahuika_crater
(Ice core evidence shows a pair of eruptions around the 1460s, likely the cause of major famines across the planet in following years. The locations of the eruptions are still unknown, though.)
Cool stuff. Really drives home that we are guests of this planet.
"This is a video camera, and this is the precise model that's getting this incredible image quality. Image quality that holds up to this kind of magnification. So that's the first great thing. We can now get high-def-quality resolution in a camera the size of a thumb."
"But for now, let's go back to the places in the world where we most need transparency and so rarely have it. Here's a medley of locations around the world where we've placed cameras. Now imagine the impact these cameras would have had in the past, and will have in the future, if similar events transpire. Here's fifty cameras in Tiananmen Square."
"There needs to be accountability. Tyrants can no longer hide. There needs to be, and will be, documentation and accountability, and we need to bear witness."
ALL THAT HAPPENS MUST BE KNOWN
-From "The Circle" by David Eggers
No, scientists have developed a prototype which can take fuzzy photos of barcodes.
They then go on to tell you what would be necessary to have their device equal a present day sensor in a phone, but they haven't made one yet.
In fact, no estimate is given for when this technology might be competitive with CMOS sensors. The article just points to his previous work as proof he can get some of his ideas to market.
Relevant XKCD: https://www.xkcd.com/678/
I am excited by advances in camera technology, but this headline is peddling research as a pending disruption to the industry, and I don't see any evidence of that in the article.
Also, here is my TL;DR summary of it if you're still trying to fight through the paywall:
There is a thing called a grating coupler, which works like a little high-frequency antenna that receives light signals. When you put a whole array of them together, you can do various scans of the light signals to simulate the camera pointing in different directions, or fisheye and telephoto effects, without tilting or moving the surface of the array. The underlying computation relies on the ability to calculate and control the timing of the signal arriving at each antenna, plus some classic signal interference and phasing issues. A 1 cm x 1 cm array would contain 1 million such couplers, which would create an image of similar size to the iPhone 7's rear camera, but since there is no lens involved, the camera can be made a lot thinner.
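A toy sketch of the phased-array math behind that "pointing without moving" trick (my illustration, with made-up wavelength and pitch, not the paper's actual numbers): delay each element so that light arriving from the chosen angle adds up in phase.

    import numpy as np

    wavelength = 1.55e-6             # metres: an assumed telecom-band wavelength
    spacing = wavelength / 2         # assumed element pitch
    n = 64
    theta = np.deg2rad(20)           # the direction we want to "point" at

    positions = np.arange(n) * spacing
    # Extra optical path for element i when the source sits at angle theta:
    path_delta = positions * np.sin(theta)
    # Per-element phase shift that makes those contributions add coherently:
    steering = np.exp(-1j * 2 * np.pi * path_delta / wavelength)

    # Sweep incoming angles and measure the steered array's response.
    angles = np.deg2rad(np.linspace(-90, 90, 1801))
    incoming = np.exp(1j * 2 * np.pi * np.outer(positions, np.sin(angles)) / wavelength)
    response = np.abs(steering @ incoming)
    print("peak response at", np.rad2deg(angles[np.argmax(response)]), "deg")  # ~20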
Further, the list succumbs to the cardinal sin of software security advice: "validate input so you don't have X, Y, and Z vulnerabilities". Simply describing X, Y, and Z vulnerabilities provides the same level of advice for developers (that is to say: not much). What developers really need is advice about how to structure their programs to foreclose on the possibility of having those bugs. For instance: rather than sprinkling authentication checks on every endpoint, have the handlers of all endpoints inherit from a base class that performs the check automatically. Stuff like that.
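A minimal Flask-flavored sketch of that structure (all names are illustrative; any framework with class-based handlers works the same way):

    from flask import Flask, request, abort
    from flask.views import MethodView

    app = Flask(__name__)

    def token_is_valid(token):
        # Stand-in for real verification (session lookup, signature check, ...).
        return token == "demo-secret"

    class AuthenticatedView(MethodView):
        """Authenticates before any handler method runs; subclasses can't forget it."""
        def dispatch_request(self, *args, **kwargs):
            if not token_is_valid(request.headers.get("Authorization")):
                abort(401)
            return super().dispatch_request(*args, **kwargs)

    class ReportsView(AuthenticatedView):
        # No auth code here: it is inherited, not sprinkled.
        def get(self):
            return {"reports": []}

    app.add_url_rule("/reports", view_func=ReportsView.as_view("reports"))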
Finally: don't use JWT. JWT terrifies me, and it terrifies all the crypto engineers I know. As a security standard, it is a series of own-goals foreseeable even 10 years ago based on the history of crypto standard vulnerabilities. Almost every application I've seen that uses JWT would be better off with simple bearer tokens.
JWT might be the one case in all of practical computing where you might be better off rolling your own crypto token standard than adopting the existing standard.
Why not? If it's an API meant to be consumed by a server I don't see what the problem is.
It's a SO article on security for web transactions.
Given we're talking about APIs, we avoid many of the UX problems, but it feels like taking on a different set of problems than just using a bearer token. It does provide baked in solutions for things like revocation and expiry though.
You can learn and run automated tools for 6 months and end up knowing 1/3rd of what a great pentester knows.
If you want to know you can resist an attack from an adversary, you need an adversary. If you want to know that you followed best practices so as to achieve CYA when something bad happens, that's a different story.
But honestly the security picture is so depressing. Most people are saved only because they don't have an active or competent adversary. The defender must get 1,000 things right, the attacker only needs you to mess up one thing.
And then, even when the defender gets everything right, a user inside the organization clicks a bad PDF and now your API is taking fully authenticated requests from an attacker. Good luck with that.
Security, what a situation.
Can somebody explain this?
It seems like it would be a lot of work to implement the suggestions here. At what point does it make sense?
I'm finding issues like API servers hanging/crashing due to overly long or malformed headers all the time when I work on front-end projects.
Programming in a language with automatic range and type checks does not mean that you can forego vigilance even with the most mundane overflow scenarios: lots of stuff is being handled outside of the "safe" realm or by outside libraries.
For example you can sign session IDs or API tokens when you issue them. That way you can check them and refuse requests that present invalid tokens without doing any I/O.
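A minimal sketch of that idea with nothing but the standard library (the key management here is deliberately naive; a real deployment needs a persistent, carefully stored key):

    import base64, hashlib, hmac, secrets

    SECRET = secrets.token_bytes(32)  # illustrative: regenerated on every run

    def issue_token(session_id: str) -> str:
        tag = hmac.new(SECRET, session_id.encode(), hashlib.sha256).digest()
        return session_id + "." + base64.urlsafe_b64encode(tag).decode()

    def check_token(token: str) -> bool:
        # Forged tokens are rejected with pure CPU work: no database round-trip.
        try:
            session_id, tag_b64 = token.rsplit(".", 1)
            expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).digest()
            return hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64))
        except (ValueError, TypeError):
            return False

    token = issue_token("session-42")
    assert check_token(token)
    assert not check_token("session-43." + token.split(".", 1)[1])  # tampered ID fails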
Under the terms of the Unified HN Convention, agreed 2015, every thread about CockroachDB must by law contain a series of complaints about the name of the database. Please post yours below.
To help you get started, here are some prompts you might use:
"My Enterprise CTO will never go for something named..."
"I just think the name sounds really disgusting and off-putting..."
"Marketing a product is at least as important as making a product, and this is bad marketing..."
As it stands, it seems to me that CockroachDB is mostly just reinventing Spark from scratch, except maybe from a more OLTP-centric perspective.
Turning off Wi-Fi before leaving home and the office helps. Apparently few people do that: a customer in the tracking industry (beacons counting people in stores) told me that about 80% leave Wi-Fi on all the time.
I hope I'll get that patch soon. The last update for my Sony phone was the May security update; nothing in June. I guess that most Androids didn't and won't get anything.
It would be great if there were a published list of exactly which devices are vulnerable, or a way to check your device for whether this part was present. Is there anything like 'adb shell lspci' I could run to find out whether my devices have the broadcom parts? I know my Nexus 5x has a QCOM SoC, so I assume it lacks broadcom WiFi. But the rest of the family's devices -- what of those?
> In its security bulletin, Google rated Broadpwn as a "medium" severity issue, meaning the company doesn't view it as a dangerous vulnerability, such as Stagefright.
> Users that didn't receive this month's Android security patch should only connect to trusted Wi-Fi networks and disable any "Wi-Fi auto-connect" feature, if using one.
What is the point of the second statement?
That slightly contradicts the headline.
We will know a lot more after @nitayart presents at BlackHat.
As a buyer of storage space, if you need storage, isn't it always going to be cheaper, faster and more convenient to use your own hard disks or a specialised service like S3?
Will people actually have an incentive to use Filecoin as a storage service instead of S3? If they don't, then the coin has no advantages over Bitcoin.
Let me know if you have any questions. Happy to answer them!
It's a pretty simple problem with a pretty simple solution. The problem is that local city councils have restricted the freedom to build through excessive zoning laws and regulations in order to increase housing prices for their own private investment benefit.
The solution is to strip them of this self-interested, tyrant-like, overbearing power and set these policies at the national level - basically how Japan does it. The more localized the power, the more self-interest will favor a minority of private individuals at the expense of society.
Here in Zurich, there are the same sorts of complaints about parking for new buildings going up; however, there is now a different trend: because rent for a parking space is typically charged separately from the apartment's rent, some parking spaces simply can't be rented out, since residents don't have cars.
In case you speak German and are interested in this sort of stuff, the regulations are available at https://www.stadt-zuerich.ch/content/dam/stzh/portal/Deutsch... .
It also shows a table on page 3 that explains how you are actually allowed and required to build fewer and fewer parking spaces the closer you get to the city center - so much so that if you look at the maps on pages 6 to 7, you can see that the grey area allows <= 10% of the parking of the white area.
They take care of parking with prefabricated 5-story garages.
Once you're building at larger scales, the costs of materials for a luxury condo and a subluxury condo are different, but not astronomically so.
Developers will go to China, Mexico, etc. and source some really nice stuff very cheaply. Sure, there are exceptions, i.e., materials that are expensive no matter what, but once you figure out a way to use cheap labor, the actual building materials are cheap.
It's similar to luxury cars - the "luxury" part doesn't necessarily cost a lot more (though it does cost a bit more), but it can be marked up a lot, lot more.
I really like the notebook format but I've yet to come across a browser window so good that I'm happy to give up an actual editor program for it.
If one were to learn a functional language, is Lisp a good choice today? Or is Haskell more appropriate?
My TXR Lisp actually has that function. :)
Oops, I mean accessor.
Intel was sued in Japan (for offering money to NEC, Fujitsu, Toshiba, Sony, and Hitachi), in the EU (for paying German retailers to sell Intel PCs only), and in the U.S. for predatory pricing, exclusionary behavior, and abuse of a dominant position (involving HP, Dell, Sony, Toshiba, Gateway, and Hitachi). The legal record is pretty clear that Intel used payments, marketing loyalty rebates, and threats to persuade computer manufacturers, including Dell and Hewlett-Packard (HP), to limit their use of AMD processors. U.S. antitrust authorities have focused on whether the loyalty rebates used by Intel were a predatory device in violation of the Sherman Act. The European Commission (EC) brought similar charges and imposed a €1.06 billion fine on Intel for abuse of a dominant position.
The sum of these efforts not only killed competitors, it killed innovation in microprocessor design outside of Intel for decades.
Ironically Intel's lack of innovation in the 21st century is a direct result of its 20th century policy of being a monopolist.
* Intel Inside
* AMD Radeon graphics
* Energy Star
* 2x JBL speakers (two mentions of JBL, one's not even a sticker)
* Dolby Digital Plus
...and a few others that depict generic features of the laptop (do I really need a sticker to tell me I have a webcam on this thing?). Honestly, it just looks tacky, like a NASCAR car. I'll peel them off some time, but yuck - totally tasteless.
Worse still, as you used the computers in real life, all of those stickers degraded into a gluey mess that got all over everything when you touched them.
I still have flashbacks of using a heatgun and alcohol wipes to un-sticker 2 dozen new HP laptops before rolling them out. Ugh.
> With a big grin, Steve looked me in the eye and said, "Trust me, I made sure that's in the contract."
Isn't that all there is to it? If you don't want an "Intel Inside" sticker slapped on your computer, you negotiate it in the contract.
Was Intel that aggressive that they wouldn't sell the chip unless you slapped their sticker on your computers?? What am I missing?
Because of this, I'm sure Steve negotiated a good price on those chips without Apple needing to be part of the "Intel Inside" program to get cheaper CPUs.
The Apple ad from that era that people love and remember is Richard Dreyfuss's "Crazy Ones", and the author even thinks that they "upgraded to Jeff Goldblum".
I checked the attribution, and there is a person's name on it. Sure, any hack can write and publish and this is probably just another example. But the odd style doesn't even strike me as 'writing the way I think' or writing and publishing quickly without editing. For example, from the 2nd paragraph, "The corresponding low also paints a picture and suggests that the low is nothing but a 97.89% since 11/14/16." I can't gather any meaning from that statement, yet it has oddly specific details.
I am not glad to see this trend and not glad that Google is embarking on this path. I suppose it is inevitable, but unless there is expertise built into this AI that can extract meaning from data on my behalf and present it in a way that is more insightful and interesting than I am, it will become yet another source of chaff I'll have to filter.
Can we at least, please, flag AI generated prose as such?
Google will one day be the arbiter of news: if something doesn't fit their world view, whether it's true or not, it will be removed from the results.
I think now is the time to set up a different model and remove their monopoly. Internet freedoms are at stake here.
Do no evil? Yeah right.
If they write local news, will they use social media as their data source? Other sources?
Really most "news" articles are only a couple of paragraphs long anyway and could be expanded or contracted on the spot to match the interest of the reader.
You think you won't succumb to their influence now, but it'll happen and there will even be "journalists" who are machines that you like. The filter bubble will completely adapt to your every need to make you feel fantastic about reading their copy, humans won't be able to compete.
* Facts delivered with arbitrary fluff words are pointless even when written by a human - the fluff obfuscates the real purpose, which is the data.
* Companies pay humans to deliver articles in most cases, and the bias of the writer or of the institution that paid shines through. I cannot find a real difference between intentional angling by payment and by algorithm.
* When the day arrives that computers can generate genuinely new, intelligent, and thoughtful pieces, I for one will be very interested in reading them. Sadly, there would be millions of variations produced at an astounding pace; we'd then need algorithms to filter the generated content for the things that are really noteworthy.
* News at its core is a sequence of facts, which raises the question of whether we really need the cruft around those facts, which so often leads to misinterpretation.
Edit: come to think of it, isn't that what Google should rather be doing?
News sites don't even use hyperlinks effectively, let alone audio/video/interaction. We should use AI to replace newspapers, not reporters.
I wonder who's behind these and similar channels.
Combining these presents an interesting opportunity to create "future news" (news that is technically fake until it isn't) thereby owning the news cycle by always being first.
This is what it will become one day. Hope they have something to stop it.
I do imagine further into the future, the automated systems will be "improved" with tone and bias to better fit the tastes of the individual reader, to the detriment of us writ large.
"Only Robot Can Free Information"
Focusing on building robots for readers instead of for news providers would be the future.
Or maybe "fake news", until 'elevated' by Google curators?
Microsoft's AI bot experiment might offer a cautionary tale.
1) Making board papers more readable. There are a bunch of trusts in the NHS that produce a stream of very complex board papers. Something to reduce unneeded complexity would save a lot of time and potentially money.
2) Converting all important documents to an Easy Read version. There are a bunch of writing styles for people with learning disabilities, low IQ, or low literacy; Easy Read is one. A company like Google focusing on this would be good because they'd improve the evidence base, bring a bit more standardisation, and improve access to information for many people.
On the other hand, older recordings with more dynamic range might sound thin at low volume, but are much richer at higher volumes (you can hear the individual instruments better and feel the space in the sound). If you try comparing older and newer masterings at a good volume the newer mastering usually sounds kinda mushy.
With a high dynamic range, a headphones listener may feel the need to adjust the volume several times in a song to boost the clarity of softer sections or to make louder sections more comfortable to the ears, depending on the listening environment. With a "loud," low dynamic range, however, the listener need only adjust the volume once, as the whole track is roughly the same volume. In other words, the listener is in control of the volume, rather than the engineer.
Recently I found out about the volume compressor, which with a single check box does exactly the right thing. I asked myself "why the heck isn't this box checked by default?" I think the answer is with audio purists wanting to stem the loudness war.
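For the curious, here's roughly what that check box does, stripped to the bone (illustrative numbers; real compressors smooth the gain with attack/release times instead of working per sample):

    import numpy as np

    def compress(samples, threshold_db=-20.0, ratio=4.0):
        """Pull levels above the threshold down by `ratio`; leave quiet parts alone."""
        level_db = 20 * np.log10(np.abs(samples) + 1e-12)
        over = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)  # at 4:1, 12 dB over becomes 3 dB over
        return samples * 10 ** (gain_db / 20.0)

    quiet, loud = np.full(4, 0.05), np.full(4, 0.9)
    print(compress(quiet))  # below threshold: untouched
    print(compress(loud))   # above threshold: pulled down toward the quiet parts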
When reading about CD mastering maxing out the volume, it seems like the right decision. Most people do want the loudest setting, with no messing around with EQs, compressors, etc. Only a tiny population wants to preserve the fidelity of the amplitude.
Coupled with the data from this page, there is no point in going too loud anyway - that's why you have gain/volume control. I'm not sure how I feel about streaming services implementing extra processing, tbh. Spotify is the worst culprit, adding limiting which can significantly change the sound of a recording.
I just wish other engineers in this industry had more pragmatism; there's way too much overcooked and distorted music around.
If you have the right source material, you can brickwall the hell out of tracks and not notice the distortion... or perhaps the distortion will even add pleasant artifacts. One of the more prominent issues with making things stupid loud is intermodulation distortion, but that really only becomes noticeable when you have pure tones or vocals being mashed into the limiter. If the source material is already distorted (think screechy dubstep synths), then it probably doesn't matter.
But yeah, when you're dealing with more traditional kinds of music, which often involve vocals or a lot more subtlety in the timbre of the instruments, brickwalling is probably not the best call. It seems that the Search and Destroy "remaster" sounding terribly distorted was intentional... but IMO it's not very listenable, nor does the distortion really bring the grungy character that I think they were going for. It just sounds bad.
* It's a step in musical production where having experience, skills, and contact with the artist matters. Not all compressors and limiters are created equal, and the default values in your media player may not sound as good as what an audio engineer would have done.
* Not everyone has good hardware and a good environment for listening to high-dynamic-range music, like those listening to classical music / jazz / Philip Glass, so these business decisions to increase volume for the market made sense at the time, I think. Audio engineers simply took advantage of a technically better medium (the CD) to make audio sound better (from what I've read, these techniques did not work well on vinyl).
* The loudness wars didn't affect old records, since, as one can see in this article, we can still find the old dynamic masterings (so we actually have the choice between the old untouched record and the new compressed-for-the-market record, and that is a good thing!).
* These music stats (mean RMS, peak RMS, max mean RMS) look at instantaneous dynamics, but the overall dynamics of a song are also very important! A good article on this topic, arguing that songs have not lost that much overall dynamic range: ['Dynamic Range' & The Loudness War, 2011] http://www.soundonsound.com/sound-advice/dynamic-range-loudn...
Loudness is a bastard. There is a reason why all the pros are usually very, very careful about level matching when doing any sort of audio comparison. Even when you know that louder can easily fool you into thinking something is better (which most listeners don't), you're still susceptible if you don't counteract it. Wanna convince a recording artist in the studio it's great? Turn up those big speakers. Instant gratification.
When it comes to music consumption I like to think this is not really a problem: The sound of compression and distortion is the sound of current music and there is nothing inherently bad about it. Older generations will tend to oppose any new musical trend for various reasons, which all end up being subjective. The younger generations that grow up on this new sound do not care about brickwall limiting, because there is nothing to fucking care about.
Music production has been, and forever will be, a mix of mostly people copying other people and flowing with the stylistic currents while adding a little something themselves. Sometimes something radical will happen. Mostly not. If you wanna stay relevant, you go with the former and keep reaching for the latter. Pretty much the same as with coding or design.
Looking at you, Death Magnetic.....
In the 1990s, Grand Central was rewired, and everything except railroad traction power was converted to 60Hz. All conversion equipment was replaced with solid state gear. It took quite a while just to find everything that was powered off one of the nonstandard systems.
It wasn't until 2005 that the last 25 Hz rotary converter was retired from the NYC subway system. (Third-rail power is 600 VDC, but subway power distribution was 13 kV 25 Hz 3-phase.)
The timezone database (maintained by people who are very particular about making sure that a specified time is a well-known time) has a note in the northamerica data file:
# From Paul Eggert (2016-08-20):
# In early February 1948, in response to California's electricity shortage,
# PG&E changed power frequency from 60 to 59.5 Hz during daylight hours,
# causing electric clocks to lose six minutes per day. (This did not change
# legal time, and is not part of the data here.) See:
# Ross SA. An energy crisis from the past: Northern California in 1948.
# Working Paper No. 8, Institute of Governmental Studies, UC Berkeley,
# 1973-11. http://escholarship.org/uc/item/8x22k30c
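The six-minute figure checks out if the slowdown ran for roughly half of each day (my arithmetic, not the tz maintainers'):

    # A synchronous clock on a 59.5 Hz supply runs at 59.5/60 of true speed.
    slowdown = 1 - 59.5 / 60               # fraction of each powered hour that is lost
    daylight_hours = 12                    # assumption: the slowdown ran ~half the day
    print(slowdown * daylight_hours * 60)  # -> 6.0 minutes lost per day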
This made matters trickier after Fukushima, as the nation is effectively two smaller electricity grids, not one large one - so making up for the shortfall became harder than it could have been. (However, there's a massive frequency converter interface between the two grids.)
Edit: Aw, shucks - now that I revisit the article, I see the exact same points being made in that article's comment section. My bad.
My brother recently visited a hydro dam in northern Minnesota that had one turbine operating at 25 Hz even as recently as the '90s, serving at least one industrial customer still running equipment that predated the interconnected 60 Hz grid.
In 1919, more than two thirds of power generation in New York was 25 Hertz and it wasn't until as late as 1952 that Buffalo used more 60 Hertz power than 25 Hertz power. The last 25 Hertz generator at Niagara Falls was shut down in 2006.
Surely ordinary light bulbs don't care about the frequency. Do they mean the electronics for fluorescent lamps? Were those common in the 1940s?
PG&E (California's primary gas and electric utility) still has DC tariffs, though I believe they provision it by installing a converter at the point of use. I believe this is just for elevators.
Parts of Back Bay in Boston were still wired for 100 V DC mains into the 1960s.
The article doesn't really go into how to make more performant art, and that's probably for the best - a lot of that will be game- and engine-dependent. It's really easy to "optimize" in a way that hurts the engine (or for programmers to build/configure the engine and shaders in a way that is counter to the artists' workflow or visual targets).
1. Are there controls (i.e. proof-of-stake) to enforce equitable and lasting storage of your items on others' machines?
2. What is the consensus model for marking peers as bad actors?
3. What are the redundancy guarantees? That is, how many nodes store my data?
4. What is the "currency" of sorts that I must "pay" in order to store a certain amount? Amount of hard disk I contribute back?
5. Why was the AGPL chosen? Surely adoption by any means, commercial or otherwise, would be welcome in a system that has equitable sharing guarantees. Now if I want to implement your spec under my choice of license, I can't even read your reference implementation.
Maybe some fodder for the FAQ. If not answered later, I'll peruse the whitepaper.
On IPFS, something gets on your computer only if you decide to let it, and there are blacklists to automatically keep off material you don't want.
On ORC, it seems that encrypted pieces of everything get stored, so you can wind up with all sorts of things you don't want, but on the other hand might be able to deny legal responsibility.
From the whitepaper Abstract:
"A peer-to-peer cloud storage network implementing client-side encryption would allow users to transfer and share data without reliance on a third party storage provider. The removal of central controls would mitigate most traditional data failures and outages, as well as significantly increase security, privacy, and data control. Peer-to-peer networks are generally unfeasible for production storage systems, as data availability is a function of popularity, rather than utility. We propose a solution in the form of a challenge-response verification system coupled with direct payments. In this way we can periodically check data integrity, and offer rewards to peers maintaining data. We further propose that in order to secure such a system, participants must have complete anonymity in regard to both communication and payments."
The webpage also says, "Redundancy is achieved through the use of erasure codes so that your data can always be recovered even in the event of large network outages."
Does this mean files can't be lost, as long as you keep paying your bill?
Mostly kidding, but really, this is pretty close to "Pied Piper" isn't it?
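To make the abstract's challenge-response idea concrete, here's a toy sketch (my illustration; the real protocol is presumably far more elaborate): the client precomputes answers before uploading, so only a host that still holds every byte can answer later.

    import hashlib, os

    data = os.urandom(1024 * 1024)   # the file a host is being paid to store

    # Client, before uploading: precompute answers to a few secret challenges.
    challenges = [os.urandom(16) for _ in range(3)]
    answers = [hashlib.sha256(c + data).digest() for c in challenges]

    def host_respond(challenge, stored_bytes):
        # The host must hash the challenge against the *entire* file.
        return hashlib.sha256(challenge + stored_bytes).digest()

    assert host_respond(challenges[0], data) == answers[0]       # honest host passes
    assert host_respond(challenges[1], data[:-1]) != answers[1]  # dropped a byte: caught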
Does anyone know of someone doing the same style of introspection write-ups - tracing, profiling, networking, the whole body of her work, not just this post - but for Windows?
I know of a few scattered posts here and there, usually from PFEs on Microsoft blogs, but the landscape of dedicated bloggers seems lacking to a novice like me.
Don't get me wrong, I respect Julia Evans as a professional, but what she mostly does is simplify other people's hard work and in-depth analysis of difficult problems in various layers of the technology stack.
It's like saying you went to MIT (Minnesota Institute of Technology).
Other than this small nit, great article.
We also integrate it into a rather large wider toolkit called LISA, which can do things like describe synthetic workloads, run them on remote targets, collect traces, and then parse them with TRAPpy to analyse and visualise kernel behaviour. We mainly use it for scheduler, cpufreq, and thermal governor development. It also does some automated testing.
It seems like WebAssembly for the kernel, but local software has the benefit of knowing the platform it is running on. I.e., why compile C code to eBPF when I can just compile to native code directly?
I can potentially see it solving a permissions problem, where you want to give unprivileged users in a multi-tenant setup the ability to run hooks in the kernel. Is that actually a common use case? I don't think it is.
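For context, the standard answer to "why not native code?" is the verifier: before loading, the kernel statically checks that an eBPF program can't crash, loop forever, or touch arbitrary memory - guarantees it can't make for a native module. The classic bcc hello-world (needs root and the bcc package; a sketch, not production tooling):

    from bcc import BPF

    # The kprobe__ prefix makes bcc auto-attach this to the sys_clone kprobe.
    prog = r"""
    int kprobe__sys_clone(void *ctx) {
        bpf_trace_printk("clone() called\n");
        return 0;
    }
    """

    b = BPF(text=prog)  # compiled to eBPF bytecode, checked by the verifier, loaded
    b.trace_print()     # stream messages as processes fork on the machine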