If there is one thing I truly love about Elixir, it is the ease of getting started while standing on the shoulders of the giant that is the Erlang VM. You can start by building a simple, not very demanding application with it, yet once you hit a large scale, there are plenty of battle-proven tools to save you massive headaches and costly rewrites.
Still, I feel that using Elixir today is a large bet. You need to convince your colleagues as much as your bosses / customers to take the risk. But you can rest assured it will not fail you when you need to push it to the next level.
Nothing comes for free, and at the right scale even the Erlang VM is not a silver bullet and will require your engineering team to invest their talent, time and effort to fine-tune it. Yet once you dig deep enough into it, you'll find plenty of ways to solve your problem at a lower cost compared to other solutions.
I see a bright future for Elixir, and a breath of fresh air for Erlang. It's such a great time to be alive!
My only concern is their long term viability and I don't just mean money wise. I'm concerned they'll have to sacrifice the user experience to either achieve sustainability or consent to a buyout by a larger company that only wants the users and brand. I hope I'm wrong, and I bought a year of Nitro to do my part.
It is probably not a solution for current Discord as they rely on linearizability, but I toyed with building an IRCd in Erlang years ago, and there we managed to avoid having a process per channel in the system via the above trick.
As for the "hoops you have to jump through", it is usually true in any language. When a system experiences pressure, how easy it is to deal with that pressure is usually what matters. Other languages are "phase shifts" and while certain things become simpler in that language, other things become much harder to pull off.
That's why I'd like to hear more about productivity and ease now. Is it faster and more fun to scale things in certain languages then others. Beam is modeled on actors, and offer no alternatives. Java offers all sorts of models, including actors, but if actors are the currently most fun and procudctive way to scale, that doesn't matter.
Anyways, learning how team scaled is interesting, but it's clear to me now languages aren't limiting factors to scale.
Also, we're looking for Erlang folks with payments experience.
> mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data
There is a nice new OTP 20.0 optimization - now the value doesn't get copied even on message sends on the local node.
Jesper L. Andersen (jlouis) talked about it in his blog: https://medium.com/@jlouis666/an-erlang-otp-20-0-optimizatio...
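For anyone curious what that trick looks like in practice, here is a minimal Elixir sketch of the mochiglobal/FastGlobal idea (module and function names are my own, purely illustrative): compile a throwaway module whose only function returns the term as a literal, so readers hit the constant pool instead of copying the value out of ETS or a GenServer heap.

    # Illustrative sketch only, not Discord's actual FastGlobal code.
    defmodule ConstCache do
      # Writes are slow (they recompile a module), so they should be rare;
      # reads are a plain function call straight into the constant pool.
      def put(key, value) do
        mod = Module.concat(__MODULE__, key)

        quoted =
          quote do
            defmodule unquote(mod) do
              def get, do: unquote(Macro.escape(value))
            end
          end

        Code.compile_quoted(quoted)
        :ok
      end

      def get(key), do: Module.concat(__MODULE__, key).get()
    end

    # ConstCache.put(:ring, some_huge_map); ConstCache.get(:ring)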
> After some research we stumbled upon :ets.update_counter/4
Might not help in this case, but 20.0 adds select_replace so you can do a full-on CAS (compare and exchange) pattern (http://erlang.org/doc/man/ets.html#select_replace-2). So something like acquiring a lock would be much easier to do.
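As a rough Elixir sketch of that CAS pattern (the table name and row layout here are made up for illustration): store the lock as a {key, holder} row and let :ets.select_replace/2 swap it only if it still holds the expected value.

    # Illustrative only: a tiny CAS-style lock on top of :ets.select_replace/2.
    table = :ets.new(:locks, [:set, :public, {:write_concurrency, true}])
    :ets.insert(table, {:my_lock, :free})

    # Atomically replace {:my_lock, :free} with {:my_lock, self()}.
    # select_replace returns how many rows it swapped, so 1 means we won
    # the lock and 0 means somebody else already holds it.
    match_spec = [{{:my_lock, :free}, [], [{:const, {:my_lock, self()}}]}]
    acquired? = :ets.select_replace(table, match_spec) == 1

    # Releasing is just writing the row back to :free.
    if acquired?, do: :ets.insert(table, {:my_lock, :free})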
> We found that the wall clock time of a single send/2 call could range from 30us to 70us due to Erlang de-scheduling the calling process.
There are a few tricks the VM uses there, and it's pretty configurable.
For example, sending to a process with a long message queue will add a bit of backpressure to the sender and un-schedule them.
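A related application-level trick, as a rough sketch of my own (not from the article): peek at the receiver's mailbox with Process.info/2 and shed or buffer work when it gets too deep, instead of letting the VM's backpressure de-schedule you mid-send.

    # Illustrative sketch: only send when the (local) receiver's mailbox is
    # shallow, otherwise report overload so the caller can drop or buffer.
    defmodule SafeSend do
      @max_queue 10_000

      def maybe_send(pid, msg) do
        # Note: Process.info/2 only works for pids on the local node.
        case Process.info(pid, :message_queue_len) do
          {:message_queue_len, len} when len < @max_queue ->
            send(pid, msg)
            :ok

          {:message_queue_len, _too_deep} ->
            {:error, :overloaded}

          nil ->
            {:error, :noproc}
        end
      end
    end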
There are tons of configuration settings for the scheduler. There is an option to bind schedulers to physical cores to reduce the chance of scheduler threads jumping around between cores: http://erlang.org/doc/man/erl.html#+sbt Sometimes it helps, sometimes it doesn't.
Another general trick is to build the VM with the lcnt feature. This will add performance counters for locks / semaphores in the VM. You can then check for the hotspots and know where to optimize.
Hopefully more companies see success stories like this and take the plunge - I'm working on an Elixir project right now at my startup and am loving it.
In practice, Discord hasn't been completely reliable for my group. Lately messages have been dropping out or being sent multiple times. Voice gets messed up (robot voice) at least a couple times per week and we have to switch servers to make it work again. A few times a person's voice connection has stopped working completely for several minutes and there's nothing we can do about it.
I don't know if these problems have anything to do with the Elixir backend or the server.
Do you use an external system like zookeeper? Or do you have very reliable networking and consider netsplits a tolerable risk?
FastGlobal in particular looks like it nicely solves a problem I've manually had to work around in the past. I'll probably be pulling that into our codebase soon.
It was super promising 3 or so years ago. But I haven't seen an update.
Erlang is amazing in numerous ways but raw performance is not one of them. BEAMJIT is a project to address exactly that.
Also, I wish they had an ORM like Sequel. These two are really what is holding me back from going all in on Elixir. Anyone care to comment on this?
I wonder how Cloud Haskell would fare in such a scenario
[Error 504 Gateway time-out]
only on Hacker News
Someone should really have told them about webassembly...
I admire the effort, but: doesn't "academic" mean "scientific"? Can there possibly be any "science" in having a well-known VM reimplemented in a well-known programming language?
* Object.values/Object.entries - https://github.com/tc39/proposal-object-values-entries
* String padding - https://github.com/tc39/proposal-string-pad-start-end
* Object.getOwnPropertyDescriptors - https://github.com/ljharb/proposal-object-getownpropertydesc...
* Trailing commas - https://github.com/tc39/proposal-trailing-function-commas
* Async functions - https://github.com/tc39/ecmascript-asyncawait
* Shared memory and atomics - https://github.com/tc39/ecmascript_sharedmem
The first five have been available via Babel and/or polyfills for ~18 months or so, so they've been used for a while now.
What matters is the ongoing standardisation process. New JS features are proposed, then graduate through four stages. Once at stage four, they are "done" and guaranteed to be in the next annual ES edition write-up. Engines can confidently implement features as soon as they hit stage 4, which can happen at any time of year.
For example, async functions just missed the ES2016 boat. They reached stage 4 last July. So they're officially part of ES2017, but they've been "done" for almost a year, and landed in Chrome and Node stable quite a while ago.
It would be nice if EcmaScript could give us a middle ground -- ability to use 32/64 bit integers without having to go all the way down to asm.js or wasm.
Granted it may be limited to consumption via Emscripten, it is nevertheless now within the realm of possibility.
For those who cannot grok the gravity of this -- proper concurrent/parallel execution just got a lot closer for those targeting the browser.
Everything is supported, except "Shared memory and atomics"
And some interesting tweets by Kent C. Dodds: https://twitter.com/kentcdodds/status/880121426824630273
Edit: fixed KCD's name. Edit #2: No, really.
>BT: Assuming we get standard modules?
>AWB: We'll get them.
Am I the only one who finds this ironic?
What's the feature you're most excited about?
Standing Up to a Dangerous New Breed of Patent Troll - https://blog.cloudflare.com/standing-up-to-a-dangerous-new-b...
Patent Troll Battle Update: Doubling Down on Project Jengo - https://blog.cloudflare.com/patent-troll-battle-update-doubl...
Also, the patent applies the same way to almost any proxy server (ICAP and similar https://en.wikipedia.org/wiki/Internet_Content_Adaptation_Pr...)
Encourage everyone to check with your firm's General Counsel about this. If you use Latham, or Kirkland or Weil, encourage your GC to reach out and make your views heard. It's despicable that these lawyers are harassing their firms' former and potential clients.
Let software work under trade secrets, but not patents. Anyone can implement something they think through. It's usually a clear example of a need. That said, I think the types of patent trolling law firms such as this deserve every bit of backlash against them that they get.
Long time ago certain Philadelphia area law firms decided to represent vegan protesters that created a major mess in a couple of high end restaurants.
A certain flamboyant owner of one of the restaurants targeted decided to have a good time applying his version of asymmetric warfare. The next time partners from those law firms showed up to wine and dine their clients in the establishment, the establishment(s) politely refused the service, to the utter horror of the lawyers.
Needless to say, the foie gras won...
Why on earth aren't non-practicing entity patent lawsuits outlawed? Seems like a no-brainer, and I can't imagine these firms being big enough to have any serious lobbying power.
If it's not illegal, more work needs to be done to make it illegal. Inventors always have avenues, moreso today than ever before.
The law patch that shuts down patent trolls will have no effect on software patents, and vice-versa.
That's not new. It's exactly what Intellectual Ventures was (or is?) doing.
I found my laptop to be very beneficial in my classroom learning during college, but only when I made it so. My secret was to avoid even connecting to the internet. I opened up a word processor, focused my eyes on the professor's slides or visual aids, and typed everything I saw, adding notes and annotations based on the professor's lecture.
This had the opposite effect of what this article describes: by focusing my distracted efforts on formatting my notes and making them more coherent, I kept myself focused, and could much more easily engage with the class. Something about the menial task of taking the notes (which I found I rarely needed to review) prevented me from losing focus and wandering off to perform some unrelated activity.
I realize my experience is anecdotal, but then again, isn't everyone's? I think each student should evaluate their own style of learning, and decide how to best use the tools available to them. If the laptop is a distraction? Remove it! Goodness though, you're paying several hundred (/thousand) dollars per credit hour, best try to do everything you can to make that investment pay off.
The lecture format is what needs changing. You need a reason to go to class, and there was nothing worse than a professor showing slides from the pages of his own book (say) or droning through anything that could be Googled and read in less time. If there isn't some live demonstration, or lecture-only material, regular quizzes or other hook, you can't expect students to fully engage.
In the next cut, some students come to class, put a recorder on their desk and leave, then pick it up later.
Eventually there's a scene of the professor lecturing to a bunch of empty desks with just recorders.
And the final scene there's the professor's tape player playing to the student's recorders.
Abstract: Laptop computers are widely prevalent in university classrooms. Although laptops are a valuable tool, they offer access to a distracting temptation: the Internet. In the study reported here, we assessed the relationship between classroom performance and actual Internet usage for academic and nonacademic purposes. Students who were enrolled in an introductory psychology course logged into a proxy server that monitored their online activity during class. Past research relied on self-report, but the current methodology objectively measured time, frequency, and browsing history of participants' Internet usage. In addition, we assessed whether intelligence, motivation, and interest in course material could account for the relationship between Internet use and performance. Our results showed that nonacademic Internet use was common among students who brought laptops to class and was inversely related to class performance. This relationship was upheld after we accounted for motivation, interest, and intelligence. Class-related Internet use was not associated with a benefit to classroom performance.
Thanks to bs articles like this that try to over generalize their results, I was unsure if I "needed" a laptop when returning to school.
Got a Surface Book and here's what I've experienced over the last 2 semesters:
- Going paperless, I'm more organized than ever. I just need to make sure I bring my Surface with me wherever I go and I'm good.
- Record lectures, tutorials, office hours, etc. Although I still take notes to keep myself focused, I can go back and review things with 100% accuracy thanks to this.
- Being at 2 places at once. ie: Make last minute changes before submitting an assignment for class A or attend review lecture to prepare for next week's quiz in class B? I can leave the surface in class B to record the lecture while I finish up the assignment for class A.
If you can't control yourself from browsing the internet during a lecture then the problem is not with your laptop...
The same is true for meetings at work. In a good session, people are using their laptops to look up contributing information. In a bad one... well... you know.
I took lots of notes. Some people claim it's pointless and distracts from learning, but for me the act of taking notes is what helped solidify the concepts better. Heck, due to my horrible handwriting I couldn't even read some of the notes later. But it was still worth it. Typing them out just wasn't the same.
Also re: other comments: A video lecture is to a physical lecture what a conference call is to a proper meeting. A professor rambling for 3h is still miles better than watching the same thing on YouTube. The same holds for tv versus watching a film on a movie screen.
Zero distractions and complete immersion. Maybe VR will allow it some day.
That being said, you can't control them; however, I like to look at different performance styles. What makes someone binge-watch Netflix episodes but want to nod off during a lecture? Sure, one has less cognitive load, but replace the Netflix binge with anything. People are willing to engage, as long as the medium is engaging (this doesn't mean easy or funny, simply engaging).
[Purely anecdotal, opinion-based discussion] This is one of the reasons I think flipping the classroom does work; they can't tune out. But if it's purely them doing work, what's your purpose there? To babysit? There needs to be a happy medium between work and lecture.
I like to look at the class time in an episodic structure. Pick a show and you'll notice there's a pattern to how the shows work. By maintaining a consistency in the classroom, the students know what to expect.
To tie it back to the article, the laptop is a great tool to use when you need them to do something on the computer. However, they should be looking at you, and you should be drawing their attention. Otherwise, you're just reading your PowerPoint slides.
In college I slid into the pattern they saw here. I started spending more time on social media, paying less attention in class, slacking on my assignments. As my burnout increased, the actual class times became less a thing I learned from and more just something I was required to sit in. One of my college classes literally just required me to show up. It was one of the few electives in the college for a large university. The students were frustrated they had to be there, and the teacher was tired of teaching students who just didn't care.
Overall I left college burnt out and pissed at the whole experience. I went in wanting to learn; it just didn't work out.
Are students taught how to take notes effectively (with laptops) early in their academic lives? Before we throw laptops out of classrooms, could we be improving the situation by putting students through a "How To Take Notes" course, with emphasis on effective laptopping?
It's akin to "how to listen to music" and "how to read a book" courses -- much to be gained IMO.
I wonder if that could be skewed, because it only takes one request to pull up a course syllabus, but if I have Facebook Messenger open in another tab, it could be receiving updates periodically, leading to more time recorded in this experiment.
The former was definitely the superior of the two options.
I have a feeling that people who aren't paying attention weren't going to anyhow.
However, I'd also guess that at least some people use the computer to look up additional information instead of stopping the class and asking, which helps everyone involved.
To avoid making purely meta comment, in my opinion the ship has already sailed; we are going to have computers in classrooms for better or worse. So the big question is how can we make the best use of that situation.
Some students are of course better with a laptop in the classroom
This is a potential methodological flaw. It takes me 5 minutes to log onto my university's VLE and download the course materials. I then read them offline. Likewise, taking notes in class happens offline.
Internet use does not reflect computer use.
However, a laptop is very useful to get work done during breaks or labs when you're actually supposed to use it.
The real solution is to engage students so they don't feel the urge to get distracted in the first place. Then you could give them completely unfiltered Internet and they would still be learning (perhaps even faster, using additional resources). You can't substitute for an urge to learn: even if you strap them to their chairs, pin their eyeballs open, and strap their individual fingers down, it won't do anything. It just makes school less interesting, less fun, and less appealing, which by extension makes learning less fun, less appealing, and less interesting.
Maybe the best way out of this mess is vouchers.
If the schools are functioning, it should be obvious to them that the laptops are not working out.
I never had any problems with math until I went to university, so I was merely a passive observer of everyday struggle for some people. I honestly believe that foundations are the key. Either you're taught to think critically, see patterns and focus on the train of thought, or you focus on numbers and memorization.
The latter obviously fails at some point, in many cases sufficiently late to make it really hard to go back and relearn everything.
Math is extremely hierarchical and I believe schools do not do enough to make sure students are on the same page. If we want to fix teaching math, I would start there, instead of working on motivation and general attitude. Those are consequences, not the reasons.
My wife went to school for Architecture, where she learned "basic" structural mechanics and some Calculus, but still cannot explain to me in simple words what an integral or a derivative is. Not her fault at all: her Calculus professor had them calculate polynomial derivatives for 3 months, without ever making them understand the concept of "rate of change", or what "infinitesimal" means.
For me that's a big failure of our current "science" education system: too much focus on stupid application of equations and formulas, and too little focus on actually comprehending the abstract concepts behind them.
This may be grave heresy in the Temple of Tabula Rasa where most education policy is concocted, but nonetheless every teacher I ever knew was ultimately forced to choose between teaching a real math class with a ~30% pass rate or a watered-down math Kabuki show with a pass rate just high enough to keep their admins' complaints to a low grumble.
In the end we teachers would all go about loudly professing to each other that "It's not about numbers, it's about learning how to think" in a desperate bid to quash our private suspicions that there's actually precious little that can be done to teach "how to think."
Source: I'm slow but good at Math and ended up dropping it as soon as I could because it would not get me the grades I needed to enter a top tier university.
It's about learning a set of thinking skills, not how to think. Many people who know no math can think and function very well in their domains and many people who know lots of math function and think poorly outside of math.
I think I often struggled or was intimidated by the syntax of math. I started web development after years of thinking I just wasn't a math person. When looking at this repo, I was surprised at how much more easily and naturally I was able to grasp concepts in code compared to being introduced to them in math classes.
The article brings out a good point about math anxiety. I have had to deal with it a lot in my years of teaching math. Sometimes my classroom has seemed so full of math anxiety that you could cut it with a butter knife. I read one comment that advocated starting our children out even earlier on learning these skills, but the truth is the root of math anxiety in most people lies in being forced to try to learn it at too early an age. Most children's brains are not cognitively developed enough in the early grades to learn the concepts we are pushing at them, so when a child finds failure at being asked to do something he/she is not capable of doing, anxiety results and eventually becomes habit, a part of their basic self-concept and personality. What we should instead do is delay starting school until age 8 or even 9. Some people don't develop cognitively until 12. Sweden recently raised their mandatory school age to 7 because of what the research has been telling us about this.
We don't communicate in math jargon every day, so it's ultimately a losing battle. We learn new concepts but we lose them since we don't use them. Additionally, a large number of students get lost and frustrated and finally give up, which makes math a poor method for teaching thinking, since only a few students can attain the full benefits.
Yes, Math is important, and needs to be taught, but if we want to use it as away to learn how to think there are better methods. Programming is a great way. Students can learn it in one semester and can use it for life and can also expand on what they already know.
Also, exploring literature and discussing what the author tries to convey is a great way to learn how to think. All those hours in English class trying to interpret what the author meant were more about exploring your mind and your peers' thoughts than what the author actually meant. The author lost his sphere of influence once the book was published. It's up to the readers of every generation to interpret the work. So literature is a very strong way to teach students how to think.
In other subjects you can rationalize to yourself in various ways: the teacher doesn't like me, or I got unlucky and they only asked the history questions I didn't know.
But with math, no rationalization is possible. There's no hope the teacher will go easy on you, or be happy that you got the gist of the solution.
Failure in math is often (but not always) a sign that education has failed in general. Teachers can be lazy or too nice and give good grades in art or history or reading to any student. But when the standardized math test comes around, there's no hiding from it (teacher or student).
From our experience, most people struggle with math because they forgot or missed a certain math skill they might have learned a year or two before. But most teaching methods only tell the students to practise more of the same. When looking at good tutors, we could see that a tutor observes a student and then teaches them the missing skill before they actually go to the problem the student wanted help with. That seems to be a useful, working approach.
Abstract reasoning, intuition, and creativity, to me, represent the underpinnings of software engineering, and really, most engineering and science, but they are taught more by osmosis alongside the unintuitive, often boring mechanics of the subjects. The difference between a good engineer of any sort and one that 'just knows the formulas' is the ability to fluently manipulate and reason with symbols and effects that don't necessarily have any relation or simple metaphor in the tangible world. And taking it further, creativity and intuition beyond dull calculation are the crucial art behind choosing the right hypothesis to investigate. Essentially, learning to 'see' in this non-spatial space of relations.
When I'm doing systems engineering work, I don't think in terms of X Gb/s throughput and Y FLOPS... (until later at least), but in my mind I have a model of the information and data structures clicking and buzzing, like watching the gears of a clock, and I sort of visualize working with this, playing with changes. It wouldn't surprise me if most knowledge workers have similar mental models of their own. But what I have observed is that people who have trouble with mathematics or coding aren't primed at all to 'see' abstractions in their mind's eye. This skill takes years to cultivate, but it seems that its cultivation is left entirely to chance by orthodox STEM education.
I was just thinking that this sort of thing could be approached a lot more deliberately and could yield very broad positive results in STEM teaching.
That can be useful of course (especially back then when we didn't carry computers in our pockets at all times) but I think it sends some pupils on a bad path with regards to mathematics.
Maths shouldn't be mainly about memorizing tables and "dumbly" applying algorithms without understanding what they mean. That's how you end up with kids who can answer "what's 36 divided by 4" but not "you have 36 candies that you want to split equally with 3 other people, how many candies do you end up with?"
And that goes beyond pure maths too. In physics if you pay attention to the relationship between the various units you probably won't have to memorize many equations, it'll just make sense. You'll also be much more likely to spot errors. "Wait, I want to compute a speed and I'm multiplying amperes and moles, does that really make sense?".
By that I don't mean it's easy. But when you're grappling with some problem, whatever it is, eg find some angle or integrate some function, if you don't find the answer, someone will show you, and you'll think "OMG why didn't I think of that?"
And you won't have any excuses for why you didn't think of it. Because math is a bunch of little logical steps. If you'd followed them, you'd have gotten everything right.
Which is a good reason to feel stupid.
But don't worry. There are things that mathematicians, real ones with PhDs, will discover in the future. By taking a number of little logical steps that haven't been taken yet. They could have gone that way towards the next big theorem, but they haven't done it yet for whatever reason (eg there's a LOT of connections to be made).
I like to think about math as language, rather than thought or logic or formulas or numbers. The Greek letters are part of that language, and part of why learning math is learning a completely foreign language, even though so many people who say they can't do math practice mathematical concepts without Greek letters. All of the math we do on computers, symbolic and numeric, analytic and approximations, can be done using a Turing machine that starts with only symbols and no built-in concept of a number.
It's impossible to tell if students are capable of thinking mathematically, however, because I have not met a single (non-mathlete) student who could give me the mathematical definition of... anything. How can we evaluate student's mathematical reasoning ability if they have zero mathematical objects about which to reason?
The ONLY sane way to answer these questions:
- Does math increase critical thinking?
- Does critical thinking lead to more career earnings/happiness/etc?
- When does math education increase critical thinking most?
- What kind of math education increases critical thinking?
Is with a large-scale research study that defines an objective way to measure critical thinking and controls for relevant variables.
Meaning you don't get an anecdotal opinion on the matter from your study-of-1, no-control-group, no-objective-measure personal experience.
It took me reeeally long to grasp things like linear algebra and calculus and I never was any good at it.
It was a struggle to get my CS degree.
Funny thing is, I'm really good at the low level elementary school stuff so most people think I'm good at math...
Many instructors approach the subject with a very broad understanding of it, and it's very difficult (more difficult than the math itself) to take that understanding apart and abstract it into understandable chunks of knowledge or reasoning.
Contrast that to if I had learned programming instead. Programming definitely teaches you how to think, but it also has immense value and definite real-world application.
I always tell people programming and syntax are easy - it's learning to think in a systems and design mindset that is the hard part.
It just bugs me sometimes when people make hyperbolic statements like that. I remember coworkers saying things like "software consulting isn't about programming". Yes it is! The primary skill involved is programming, even if programming is not the ONLY required skill.
I was an Applied Math major at Berkeley. Why?
When I was in 7th grade, I had an old school Russian math teacher. She was tough, not one for niceties, but extremely fair.
One day, being the typical smart ass that I was, I said, why the hell do I need to do this, I have 0 interest in Geometry.
Her answer completely changed my outlook and eventually was the reason why I took extensive math in HS and majored in math in college.
Instead of dismissing me, instead of just telling me to shut up and sit down, she explained things to me very calmly.
She said doing math beyond improving your math skills improves your reasoning ability. It's a workout for your brain and helps develop your logical thinking. Studying it now at a young age will help it become part of your intuition so that in the future you can reason about complex topics that require more than a moment's thoughts.
She really reached me on that day, took me a while to realize it. Wish I could have said thank you.
Wherever you are Ms. Zavesova, thank you.
Other benefits: doing hard math really builds up your tolerance for working through hard problems. Reasoning through long problems, trying and failing, really requires a certain kind of stamina. My major definitely gave me this. I am a product manager now, and while I don't code, I have an extremely easy time working with engineers to get stuff done.
With the sole exception of Geometry, every single math class I took in middle and high school was an absolutely miserable time of rote memorization and soul-crushing "do this same problem 100 times" busy work. Geometry, meanwhile, taught me about proofs and theorems v. postulates and actually using logical reasoning. Unsurprisingly, Geometry was the one and only math class I ever actually enjoyed.
I have thought about changing my major to pure mathematics too.
In our lowest level course we teach beginning algebra. Almost everyone has an intuition that 2x + 3x should be 5x. It's very difficult to get them to understand that there is a rule for this that makes sense. And that it is the application of this rule that allows you to conclude that 2x + 3x is 5x. Furthermore, and here is the difficulty, that same rule is why 3x + ax is (3+a)x.
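The rule in question is distributivity; written out, the step students are being asked to internalize is:

    2x + 3x = (2 + 3)x = 5x,   and by the very same rule,   3x + ax = (3 + a)x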
I believe that for most people mathematics is just brainwashing via familiarity. Most people end up understanding math by collecting knowledge about problem types, tricks, and becoming situationally aware. Very few people actually discover a problem type on their own. Very few people are willing, or have been trained to be willing, to really contemplate a new problem type or situation.
Math education in its practice has nothing to do with learning how to think. At least in my experience and as I understand what it means to learn how to think.
At the time, WYSIWYG was a bullet-point promise that never delivered, so seeing it actually happen was amazing. And it was in a product that had the feel of "Of course it does it that way, because that's how it should be done."
I often lament that it was never an Open Source project. It got passed around companies looking to use it in some niche or other while it slowly decayed. It had enough enthusiasts that as an open project it would have developed.
Edit: This was in 1997. We were writing a web browser for the Brother Geobook which was a device with late 90s PDA capabilities in the form factor of a late 90s laptop. I don't think that it was a particularly successful product.
It was a pretty amazing piece of code that made that 8086 very usable for a few more years.
Am I the only one who did a double-take at this? I don't associate OOP with being lightweight. It's either oxymoronic or irrelevant.
You can get many of the benefits of GraphQL using postgrest's resource embedding:
We're using it in production.
PS: To be clear, you can't expose it directly to your users. We wrap it in a proxy service that provides authentication and authorization, and parses and transforms the users' URL queries destined for PostgREST. We also apply some transformations to the data coming back from PostgREST, such as encoding our internal UUIDs. It may sound complicated, but it's actually only about 200 lines of Erlang.
However, 99% of the tutorials on GraphQL, this one included, fail to show a real-life use case. What I mean by that is a working example of a SQL database from start to finish.
So this tutorial was very cool, but not very useful. Just like the rest of them.
I've yet to find a recent tutorial that covers full stack node.js + PostgreSQL/MySQL + whatever front end. It's always MongoDB or only covers the concepts of GraphQL.
We're super excited to finally launch this resource that we've worked on together with amazing members of the GraphQL community! The goal of How to GraphQL is to provide an entry-point for all developers to get started with GraphQL - no matter what their background is.
The whole site is open-source and completely free to use! If you want to contribute a tutorial because your favorite language is still missing, please get in touch with us!
Here's the official announcement blog post on the Graphcool blog: https://www.graph.cool/blog/2017-07-11-howtographql-xaixed1a...
If you find a bug or another problem, create an issue or submit a PR on the GitHub repo: https://github.com/howtographql/howtographql
Follow us on Twitter to be informed about new content that's added to the site: https://twitter.com/graphcool
Instead of allPersons, I think it would be cleaner and easier to understand as
Which makes the generic nature of "all" explicit.
The best thing about GraphQL is having a standard interface for queries (and more) and all tools that built upon it (such as Apollo). To name a few existing and upcoming (?) features from Apollo: Query batching, real-time updates (WebSockets + subscriptions), caching, optimistic UI, polling, pagination, live queries and many more.
Also, GraphiQL is pretty cool, too, basically Swagger for free.
Is there a library that, say for Golang, helps translate a GraphQL Query into SQL statements to actually get the data?
Save requests and bandwidth, no need for explicit version control, and to top it off an easy to use syntax? Sign me up!
I'll definitely be using this in future projects.
My background with backend is mostly PHP. Any good plans on adding PHP guide to backend section or is there no good GraphQL-server/implementation for PHP?
Also, I have been working as an Android developer for the last couple of years, and I was wondering how similar are the React and Android implementations of Apollo.
They seem to be trying to cover all bases with the Google Brain Residency, the Machine Learning Ninja program, standard VC funding, and now this. If you have talent in ML/AI there is a way Google can help you succeed in the style of your choice. Want to be a founder? Excellent! Want to be a founder but also kinda part of Google? Sure! Are you super talented and experienced in other disciplines and want to explore AI and maybe contribute a 2-5% improvement to one of our model's performance? Yes! We have that!
> We can help you find and incorporate data sets into your first models. From cleaning data to extracting the most important features, our team can help you get your production models to market.
While realizing the hardest part of a startup is everything but the tech, it seems odd they're telling AI companies they'll help with the hardest parts of the technical side, the ones that need to be done right well before anyone can tell if your tech has any merit.
I'd hate to be a first-pass reviewer for all the pitches they're gonna get. "I have this amazing idea, I just need someone else to build the AI behind it!"
Thought about searching to find out how hackable the Amazon version is... Then decided I have better things to do.
Not so great for much else.
The real trick is that they manage to get people to pay them anything for these little trojan horses.
Personally, I have no use for one. "Alexa turn off the lights", or just get up or use your phone? That's what I do at least.
...Are there holes in those "tilings", or are the tiles not all the same shape, or am I misunderstanding what non-periodic means in this context?
And what is the name for those types of "self-surrounding" tiles on the cover:
they're repeatedly dipping it, and using the volume displacement to reconstruct the shape. Amazing. The site is hammered right now so I can't get more details: anyone see how many dips are required to get the highest-detail models they show on the landing page?
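To make the idea concrete, here is a toy one-orientation sketch (my own illustration, not the actual multi-dip reconstruction the paper describes): if you record cumulative displaced volume at evenly spaced depths, the per-step difference approximates the cross-sectional area of that slice.

    # Toy illustration only: recover per-slice cross-sectional areas from
    # cumulative displaced-volume readings taken at fixed depth steps.
    dh = 0.5                                  # depth step (cm)
    volumes = [0.0, 1.2, 3.0, 5.4, 7.0, 7.4]  # cumulative displaced volume (cm^3)

    areas =
      volumes
      |> Enum.zip(tl(volumes))
      |> Enum.map(fn {v_prev, v_next} -> (v_next - v_prev) / dh end)

    IO.inspect(areas, label: "slice areas (cm^2), bottom to top")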
Not sure how practical it is right now, but I wonder if you could do this with air volume at a high enough delta measurement resolution you might get some amazing results.
e.g, some of the water will stick to the sides of the object.
1. Does not require rotation of the DUT, but instead uses just rising fluid level.
2. Uses permeable fluid so it achieves full density scans.
He spent a number of years trying to get the product to market as a startup, but ran out of personal funding.
He believes Archimedes may have used the Roman dodecahedron as a fluid scanner to test the quality of their projectiles to improve accuracy.
How do they handle overhangs that trap bubbles?
Maybe shaking and scanning in reverse? (Can still cause weird effects when the air can't get back in, but should be more detectable.)
It would be useful for the founder community (especially outside YC network) to have examples of how different recent startups have done it - offered discount or cap or both, how they determined the cap, the experience at A, experience with SAFE when dealing with angels/micro VCs, etc.
Any founder willing to share that here?
Serious question - how is this still the case, or make any sense? Wouldn't it be in the investor's interest that the company doesn't spend $60K out of their raise on this, and instead on hires, product, etc?
And given how standard a process this must be for every VC firm, I imagine they would have a well-negotiated rate, which for them is an incremental cost of investing?
I'd like to believe that there are firms out there that don't do this, and that this turns out to be some sort of advantage for them (a form of founder-friendly/company-friendly, if you will).
SAFEs are great. They're easy to understand, and any semi-quantitative founder should be able to build dilution spreadsheets without the help of a lawyer.
I actually tried to get some lawyers here in Australia to convert over the YC SAFE agreements to Australian law and I could not find one. Apparently the big blocking point is none of the law firms wanted to take responsibility for the legal liability. This is one area where our "innovation" government could get involved to sort out.
I won't make one anymore.
I've done two deals that involved a SAFE, and it's been almost 2 years and the companies are still looking to raise a round. If they do, I'm looking at a 10-20% return.
It's not worth it for the risk.
Indeed, at least convertible notes are debt and give you some standing if you're in a liquidation. A SAFE doesn't even give you that.
To the extent that founders suffer too much dilution while raising via a SAFE, it's more likely the case that there was something not working with the business.
Sure, if the founders could have raised a priced equity round from the get-go, they probably should have done that over a SAFE, but more likely the legal expenses would have been too onerous for that to have been an option...
In general, the problem is that most benefits that investors enjoy are properties of their shares rather than the money that they invested. For an equity round this is one and the same. Not so much for convertible notes. A simple example:
An entrepreneur raised a $1M convertible note with a $5M cap. Ignore discount, interest and other factors for now. She then raises a $5M round at a valuation of $20M. That yields a dilution of 20% for the round plus a "hidden" dilution of ~17% for the note conversion (1/6). That's the blurry issue that both authors discuss. But if anything the share rights are even blurrier. Let's say that the equity round came with what is commonly referred to as a 1x liquidation preference (non-participating). So they would get $5M back before other shareholders get anything. Even though I just worded that as matching the money that they put in, it is generally a property of the share class that the investors hold. For example, their $5M might have bought 5M shares at $1/share that each say "redeemable for $1 or convertible to common shares". Our note investors also hold those shares now. But instead of holding one per dollar, they now hold four per dollar (since they paid 1/4 the price for such a share). Suddenly, they effectively have a 4x liquidation preference benefit and the company has to return a full $9M before common shareholders/founders see a penny of payout (despite only having $6M in the bank).
Interest rates, pre-round ESOP increases, and many other factors in convertible notes make this problem worse. And it affects just about all aspects of the cap table, including voting rights, protective provisions, redemption rights, etc. Basically, the bigger the gap between the cap and the eventual round, the bigger the privilege the note investors pick up. Not just in economic benefit where you would expect it, but also in power/insurance/protections/etc. where it isn't obvious at all. Nowhere in your term sheet for the note or equity round will it mention a 4x liquidation preference. Doing so would cause instant rejection of the deal by even the most inexperienced founder! But that's exactly what is going to happen once all the conversion mechanics are executed. And that can catch even seasoned entrepreneurs off guard (and seasoned investors, including plenty of note holders who never understood that they would get these benefits).
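To make the hidden-preference mechanics concrete, here is a small worked sketch of the numbers above (the $1.00/share price for the new money is an assumption purely for illustration):

    # Worked example of the hidden liquidation preference described above.
    note       = 1_000_000     # convertible note raised
    cap        = 5_000_000     # the note's valuation cap
    round_size = 5_000_000     # new money in the equity round
    valuation  = 20_000_000    # equity round valuation

    price_new  = 1.00                          # assume new investors pay $1.00/share
    price_note = price_new * cap / valuation   # note converts at the cap: $0.25/share

    shares_new  = round_size / price_new       # 5,000,000 preferred shares
    shares_note = note / price_note            # 4,000,000 of the same preferred shares

    # Each preferred share carries a 1x ($1.00) non-participating preference,
    # so the stack that must be paid out before common sees anything is:
    preference = (shares_new + shares_note) * price_new
    IO.puts("preference stack: $#{trunc(preference)}")               # $9,000,000
    IO.puts("cash actually invested: $#{trunc(note + round_size)}")  # $6,000,000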
Convertible notes - SAFE or otherwise - have a role to play in venture financing. But they are complex instruments and should be used carefully. Anything else is just a recipe for pain in the long run.
A SAFE _is_ a convertible note, with standardized language.
I first wrote a note, but opted to deliver the message in person as they were home [also handed them the note at the end of it as it had my phone number and email on it should they need me]. At the end of the day, I think I've turned what was potentially the beginning of a bad situation into something that brought us closer together and made us more likely to communicate effectively in the future.
Edit: One thing some people are saying further down is regarding admissions of guilt and i forgot to touch on that but it's clearly what I did here...
I think we live in a society that is often scared that doing the right thing [often in the form of apologizing] will get us in trouble. Plenty of times this holds true, but I think if we all did it more often it might be for the greater good, plus sometimes getting in trouble teaches us a valuable lesson. Doing the right thing should be your priority, but when you mess up I feel it's very important to correct it or you will often suffer small but longer-lasting side effects [stress, bad relationships, etc].
If you practice being comfortable saying something like, "I'm sorry, I overlooked X. I'm fixing it now, and will do my best to make sure it doesn't happen again" then your professional and personal life may be just a tad smoother. All of this implies, and requires, a mindset that is self-aware enough to evaluate itself. That can be the difficult part, that requires regular practice and resistance of instincts to go on the defensive and justify something, even if it was "right" at the time, if circumstances now show otherwise.
2. This document conflates negligent/avoidable mistakes with situations where "there has been an unintended or unexpected event" and "includes recognised complications referred to in the consent process". In the latter cases, "I'm sorry" is an expression of sympathy, not a true apology.
3. These sort of apologies are required by statute. How meaningful can they be?
tl;dr apologies are usually a good thing in your personal life and can prevent litigation in legal matters and are rarely admissible as proof of guilt. I am not a lawyer.
Seriously though, I find myself apologizing profusely all of the time in ordinary conversation. Ordinarily I would think that it's a habit I need to curb, but the sorries are all genuine!
There is a chart with "Do say" and "Don't say" columns listing appropriate and inappropriate phrasing examples.
I will ask an extreme question about it then make it a bit less reductionist.
The chart doesn't say whether "I'm sorry I caused your son's death" falls under "Do say" or "Don't say".
The thing is, although the above line exaggerates, if you are being transparent then there are a LOT of statements that reduce to "I (we) caused your son's death", but which are much more technical, i.e. regarding what was done.
In this case it is unclear whether these are to be avoided or can be mentioned? It says "These steps include informing people about the incident" but it is not totally clear whether they mean it.
Malpractice is the third-leading cause of death in the United States, so my question isn't an idle one.
The PDF could be far more specific here.
This made me download Qubes. Amazing project that seems to care.
* Let AMD know that open-sourcing/disabling PSP is important to you.
* Contribute to RISC-V. You can buy a RISC-V SoC today. Does your favorite compiler have a RISC-V backend?
https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo...
https://www.sifive.com/products/hifive1/
I recently flashed coreboot on my X220 (and it worked surprisingly enough). However, I couldn't find any solid guides on how to set up TianoCore (UEFI) as a payload -- does Qubes require Trusted Boot to be supported on their platforms (I would hope so)? And if so, is there any documentation on how to set up TianoCore as a payload (the documentation is _sparse_ at best, with weird references to VBOOT2 and U-Boot)?
Otherwise I'm not sure how a vendor could fulfill both sets of requirements.
What could one do to make it possible to have ME-less x86 in the future?
A dozen companies with 1000 employees each and a budget of $2,500 per employee gets you $30 million, which is surely enough to get a decent, qubes-secure laptop with no ME. You aren't going to be designing your own chips at that point, but you could grab power8 or sparc or arm.
Are there companies that would reasonably be willing to throw in a few million to fund a secure laptop? I imagine at least a few. And maybe we could get a Google or someone to put in $10m plus.
This is one of the most important points. The speed at which laptop vendors are releasing new SKUs is staggering. I know the whole supply chain is to blame, but apart from a few models, the number of different SKUs is way too high.
As for the article, I found the poor writing style distracting (emphasis mine). It's frustrating to hear people say "like" and "literally" all the time in speech, but far worse to see it in written pieces:
- "...point source (literally, a dot of light)"
- "...galaxy, a star, you, me bends space, literally warps it"
- "...too faint to see. Like, hundreds of times too faint"
Yet the "farthest star" article says the farthest star is only 9 billion light years away, which is ten times closer than the diameter of the observable universe.
 - https://www.youtube.com/watch?v=4S69zZwYrx0
 - https://en.wikipedia.org/wiki/Observable_universe
At a glance, this seems like the funds are no-strings-attached. But when you think for a minute, you realize it's the exact opposite.
Google is saying that if they don't like what you do with the money, they won't give you any more but if they do like what you do with it then you might get more. This incentivizes the professor to use the money to do things that Google would like, which is the opposite of no-strings-attached.
There technically are no strings attached to this money, but the possibility of future payments (which ranged from $5k to $400k) is a pretty big enticement.
Long read but enjoyable and informative.
The Most Important Scientist You've Never Heard Of: http://mentalfloss.com/article/94569/clair-patterson-scienti...
Why is this even news? Is there a single for-profit company that funds research contrary to the company's interests?
If you're looking for arguments for antitrust in this area beyond consumer welfare, you've found them. The concentrated wealth produced by big monopolistic firms has a gravity field of its own, distorting public information and opinion.
And all those graphs showing how big Google is have nothing to do with the story. News Corp wants an anti-trust investigation into Google in the US too.
Glitch (the hosting service) is returning 504 errors, but it's easy to run it locally:
    cd /tmp
    git clone https://github.com/fiatjaf/node-dependencies-view.git
    cd node-dependencies-view/
    npm install
    npm start
* In repos containing many modules, the svg is really too wide, even when decreasing ratio to 0.1. A more space-efficient layout could possibly be found. Vertical instead of horizontal, perhaps?
* Fails to render anything whenever a module is not found (e.g. `require('./params')`). Proper fallback may be implemented.
Interesting point that at some point your observing apparatus gets good enough that you can 'see' the structures built by sufficiently advanced civilizations (sure they cloak their ships in orbit but you can see how they make their home world comfy!)
At one of the SETI seminars there was a discussion about when would be the "right" time to alert a newly discovered intelligent species that they aren't alone in the universe. There was a lot of back and forth about indigenous tribes in the Amazon, some of whom learned of other tribes by the arrival of missionaries, some by loggers, and some who were out on walkabout and came upon the strangers. How you meet outsiders has a different impact on how it affects you.
So if you were aliens and you didn't want to 'alarm' or 'damage' humans, what would you use as a signal that it was probably a good time to say "Hello"? I've always felt that once you could detect conversations happening on other planets, you would now "know" you weren't alone, and someone could appear in orbit and say hi. Others felt it would only be safe if humans felt reasonably confident in their own ability to meet them at their level (so perhaps at least colonies on other solar system bodies). One person at our table was firmly in the "only when it is unavoidable" camp, which is to say when they are about to send a probe to an inhabited planet or come across a construction, like a station, that is not easily concealed or moved.
I still estimate that by the time a civilization has planetary-scale engineering capability, that they won't need to make things like starshades.
If you have molecular nanotechnology, you can either adapt yourselves to whatever location you find, or just skip the biological body business, and directly upload your consciousness to a computer network.
The 2nd option is far more mass and energy efficient to support large numbers of sophonts, and I expect that any civilization to endure long enough will have the majority of its population living online instead of offline. If that even ends up being a thing, and they all don't just merge into a single entity (going in the direction of Star Trek's borg).
Maybe we should be building ourselves one of these...
You'd think that the aliens might not be too keen to create a huge beacon advertising their presence to all and sundry...
Maybe because by the time a planet is able to send such signals, they have discovered better means of communication, faster than light.
So, the "Hello" communications have been coming to us for maybe millions of years, but like our planet before we understood radio, we can't detect their communications.
The build system is also geared for "any developer, any platform", with support for Xcode, Android, iOS, Java, .NET, and other types of applications using our Windows or multi-platform agent.
We also do unlimited private repos for those 5 users, which I know is super important to people.
Details on VSTS: https://www.visualstudio.com/team-services/
From $7,300/person, but you'll need to wait until next year since this year's kicked off today.
What's the speculation as to their surprising abundance and uniformity?
Helicopters can't fly over it; the downward force of the air would pull them in.
They can prop up Stalinist regimes. They can prop up apartheid. They can prop up the likes of Mobutu Sese Seko.
Of course that gets into difficult judgement calls, whereas this pop-under case is pretty clear cut.
It makes sense if they want to move slowly and deliberately, but I hope they won't stop here.
Thanks, ad blocking.
EDIT: ...because the policy punishes advertisers' use of other advertising providers.
See discussion here: https://news.ycombinator.com/item?id=14712576
Or accept() incoming connections, and then pass the connection's file descriptor.
Wouldn't really work for Internet accessible IPv4 addresses, but IPv6 would be fine.
First, it presumes a 19th-century separation of "capital" and "labor" where "capital" is a bunch of greedy pigs trying their damndest to exploit labor, with little crossover between the two groups. The modern reality is way more complicated. Almost every member of "labor" has some form of pension, 401(k), IRA, or personal stock holding, and even if they don't, their governments do. Huge pension funds like CalPERS are heavily invested in the stock market, which matters because (a) many state employees rely on them for income, and even if you don't work for the state, (b) your taxes are directly tied to the investment performance of these funds. Bottom line, it's complete folly to suggest the stock market is a "rich person's problem" even if you're poor. Anyone invested in the S&P 500 is going to have a large position (relatively) in Apple.
Second, this article makes no mention of Google's hiring of Ruth Porat or the recent moves to put better capital allocation processes in place. I, for one, wish Google would behave more like Apple. I think it shows admirable restraint that Apple can pay so much cash out without wasting it on dumb things.
Third, it's just a sloppy article in general. They make no mention of whether the "performance" of the two includes the cash thrown off by dividends, which in Apple's case, is significant. They also didn't mention the complex back-story of why the Irish subsidiary is used, nor any of the academic finance research suggesting that "short-term" decision making actually benefits investors long-term.
More specific to Apple's case, the hoarding of cash overseas to avoid paying US taxes on it is one of many poisonous symptoms of our international capitalism. The amount of tax Apple would pay if those profits did come to the US would make a not-inconsequential dent in the federal deficit. The whole take-on-debt-to-pay-dividends strategy is so skeevy that while I don't doubt it's legal, it's very questionably ethical.
All that said, I think US tax policy contributes to the problem. During the Bush administration, an effort was made to argue that taxes on dividends amounted to double taxation because the corporation had already paid taxes on that money, so why should the investors also pay tax on it. And while dividend income was not made tax free, it is now (or at least was for a while, I haven't kept up) taxed at a significantly lower rate than "earned" income. But the fix that makes the most sense to me, and which would solve Apple's problem, is to exempt corporations from paying taxes on the money they then pay out as dividends, and tax the individuals earning the dividends their normal marginal tax rate. This would encourage corporations to pay more dividends, end the "double" taxation, solve some percentage of off-shore hoarding, increase US government revenue, and put more money into the economy and not in corporate bank accounts.
Even in such a vague form, this is a touch sensationalist. The impact of 'the one that wins' may be negligible. There also may be no winner.
A very silly article comparing two companies with similar money-management history and acting like some competition exists between the models because investors recently got an upper hand with Apple. They wouldn't have gotten their market position doing this. It's like comparing two athletes when one has recently acquired a disability, and suggesting both that the disability contributed to their historical success and that it points to some new model for the sport going forward.
In the end The Economist unwillingly reveals that the actual ramifications are highly theoretical (bordering on nonsense) and leaves the reader to conclude that this won't, as stated, "decide the future of capitalism". In discussing the future economy I would be much more inclined to ask: "How do we create a model where wealth and power are distributed broadly across society?", "What constitutes infrastructure in a modern economy?", "How do we know we aren't underperforming?" and so on.
Apple had $2.33 EPS for Q2 2015 and reported $1.90 EPS for Q2 2017. So EPS has declined over the last two years.
Google's other (non-ad) revenues were over $10B for 2016 and growing at 50%. Apple's total revenues in 2004, when it was 28 years old, were less than just Google's "other". Just sayin'.
Btw, Google holds the record for getting to a $100B market cap faster than any other company, even accounting for inflation.
I thought this would be about the way each interacted with customers, which is far more interesting.
There's absolutely no reason to seek growth-based returns, which carry risk, while this approach is available. Dividends and stock buybacks represent a no-value-created system of incentives for the richest people in the world, directly extracting the surplus value of laborers at the expense of workers and long-term investors. Only when a company starts to topple does there seem to be any interest in moving into new markets or improving their existing lines of business.
Apple is way older than Google; maybe that has an influence on their different approaches...
- by building a payment service that's way cheaper and easier than e.g. Western Union or banks
- by using literally every last cent in his pockets to make SpaceX work (which benefits the whole world in terms of cheap, reliable, environmentally friendly, Russian-free launches as well as the planned Mars colony)
- by launching maybe the most successful pure electric car, which will soon reach the mass-market price range
This is what I as a socialist see as the one example of capitalism that actually WORKS.
Not meant to be racist at all, but political: given the obvious tensions between Russia and the Western world (e.g. Russian meddling in US elections, financing of at least the German and French neo-Nazi parties, and the invasion of Ukraine), it's simply unacceptable to depend on Russia honoring their rocket engine delivery contracts.
You still owe me back book XY. Type "snooze" to be reminded again in 7 days. Or be reminded again tomorrow. Type "done" if you got this reminder despite having done it (same as "snooze", but I'll manually review if I should remove the reminder)
Except for the ones that are not implemented. I would be interested to try this out once it has Telegram support.
I clicked on product tour, entered a name, and then got this ridiculousness:
Me: Johnny Cash
Bot: That's interesting
Bot: Ready for an adventure?
Bot: Oh no, why? It will be fun!
Bot: Are you sure? :(
Me: (pre-scripted) ok, I can try
Bot: Great, so let's try again.
Bot: Do you want to do something relaxing or should we go crazy?
That's where I quit. Unintentionally creepy bot is slightly creepy. Am I talking to a serious product bot, or is this a get-trashed-and-sleep-on-my-couch party bot? Wild and crazy times ahead.
Potential customers should be able to dive right into a conversation with your bot tech. You should be extremely eager to show me what it can do in a live conversation, and you should have stellar pre-built examples for that purpose, all available from one click on the home page. My takeaway from my experience is that your bot tech can't do much, which is why you're not immediately showing off its capabilities (I don't know whether that's actually the case, but if this were any other site, that would be my takeaway, and I'd never return).
All in all, when you have a server that seems so close in performance to Intel for less money while consuming less power, I can't imagine that EPYC won't see broad adoption and Intel won't be squeezed.
I'm glad AMD is back and there is renewed competition in the server market!
1. The mesh interconnect looks like a big loser for the smaller parts. It's a big jump up in complexity (there's an academic paper floating around which describes the guts of an early-stage version) and seems to be a power and performance drain. I can't imagine they got the clock speeds they wanted out of it. Sure, it's probably necessary for the high-core-count SKUs, but the ring bus probably would have done a lot better for the smaller ones.
2. There's almost nothing in here for high-end workstations (which typically have launched with the server parts). Sure, AMD has Threadripper coming soon, but this looks like Intel's full lineup... so where are the parts? We've bought plenty of Xeon E5-1650s and 1660s around here, and it doesn't look like there's anything here to replace them. That's unexpected. The "Gold 5122" (ugh what a silly name) is comparable, but at $1221 is priced just about double what an E5-1650v4 runs.
Workstations are a bit of an interesting case because their loads look a lot more like a "gaming desktop" than a server: a few cores loaded most of the time with occasional bursts of high-thread-count loads. That typically favors big caches, fewer cores, and aggressive clock boosting. If you're only running max thread count every now and then, you can afford a huge frequency hit when you do. But since these are business systems we try to avoid anything that doesn't say "Xeon" on it (or "Opteron", in years past) as reliability is paramount. To see nothing here from Intel in this launch is discouraging, to say the least. I have an upgrade budget and it looks like it'll be heading nVidia's way at this point.
<disclaimer, I work for DO>
"What does this mean to the end user? The 64 MB L3 on the spec sheet does not really exist. In fact even the 16 MB L3 on a single Zeppelin die consists of two 8 MB L3-caches. There is no cache that truly functions as single, unified L3-cache on the MCM; instead there are eight separate 8 MB L3-caches."
"AMD's unloaded latency is very competitive under 8 MB, and is a vast improvement over previous AMD server CPUs. Unfortunately, accessing more 8 MB incurs worse latency than a Broadwell core accessing DRAM. Due to the slow L3-cache access, AMD's DRAM access is also the slowest. The importance of unloaded DRAM latency should of course not be exaggerated: in most applications most of the loads are done in the caches. Still, it is bad news for applications with pointer chasing or other latency-sensitive operations."
I was kind of expecting this, but it's still disappointing to see. Looks like if you need a lot of L3, Intel is still the best/only option. Not to say that AMD hasn't made massive improvements though - and it's also worth noting that while AMD's memory latency is generally worse, its throughput is typically better than Intel's.
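To make the pointer-chasing point concrete, here's the kind of dependent-load microbenchmark that exposes this (my own rough sketch, not from the article; the sizes and iteration counts are arbitrary). Every load depends on the previous one, so the per-hop time roughly tracks whichever level of the hierarchy the working set fits in - grow it past a single 8 MB L3 slice and the ns/hop figure should jump.

    /* Rough sketch: walk a random cyclic permutation so each load depends on
       the previous one and hardware prefetch can't help. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        size_t n = 16 * 1024 * 1024 / sizeof(size_t);   /* ~16 MB working set */
        size_t *next = malloc(n * sizeof(size_t));
        if (!next) return 1;

        /* Sattolo's algorithm: build one random cycle over 0..n-1. */
        for (size_t i = 0; i < n; i++) next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        size_t hops = 100u * 1000 * 1000, p = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < hops; i++) p = next[p];  /* serialized loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns/hop (p=%zu)\n", ns / hops, p); /* print p so the loop isn't elided */
        free(next);
        return 0;
    }

Run it at a few working-set sizes (say 4 MB, 16 MB, 64 MB) and you can see the cache/DRAM steps for yourself.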
> This pricing seems crazy, but it is worth pointing out a couple of things. The companies that buy these parts, namely the big HPC clients, do not pay these prices.
We in HPC would not touch these outside of big-memory systems, which are a niche even for us. The consumers of these are far more likely to be those with data-warehouse-style needs (a.k.a. Oracle customers).
Much like in the rest of the world, 2-socket systems are by far the most common in HPC.
Wondering, other than some sparse-matrix applications known to be memory-bandwidth bound, what kind of performance impact this is going to cause. Are there any real memory-bandwidth-bound applications other than the ML/AI stuff used by the big Internet names?
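For a concrete picture of "memory bandwidth bound": the classic example is a STREAM-style triad, which does almost no arithmetic per byte moved, so it saturates the memory controllers long before it runs out of cores. A toy version (illustrative only; the array size is arbitrary, just much larger than any cache):

    #include <stdio.h>
    #include <stdlib.h>

    #define N (8 * 1024 * 1024)   /* 8M doubles = 64 MB per array, well past L3 */

    int main(void) {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        if (!a || !b || !c) return 1;

        for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        double scalar = 3.0;
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];  /* 2 flops per ~24 bytes of traffic */

        printf("a[0] = %f\n", a[0]);      /* keep the compiler from eliding the work */
        free(a); free(b); free(c);
        return 0;
    }

Sparse solvers, stencil/CFD codes, and large in-memory scans tend to behave the same way, which is why per-socket bandwidth matters well outside the ML/AI crowd.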
Looks pretty solid. Sure, not everything scales linearly with core count, but if your task does, it looks like AMD might be worth considering.