I started getting fascinated by computer architecture a while back, but then I saw how dead embedded programming was in my area.
I enjoyed the read!
Again, it's such a small thing, but I (and I guess other people as well) tend to focus on this kind of graphical detail. I wonder if it's easy to fix...
In Flutter, everything is a nested pile of objects with too many APIs to keep track of. Take this example: https://github.com/flutter/flutter/blob/master/examples/stoc...
Why do I need to care if something takes a `child: (single object)` argument or a `children: [LIST of objects]`?
Flutter would be better with JSX: JSX hides how the puzzle pieces fit together. I don't care if it takes a child or children; just make everything connect the same way.
React Native's Flexbox also beats Flutter's approach. Why do I need to memorize which objects take which styling arguments? You want to center items on the screen? Re-nest everything inside a Center object! You want a column or a row of elements? Use a Column/Row object!
For a framework that's trying to bill itself as a great tool for prototyping, it feels like I'm sifting through a mountain of minutiae. I was able to guess my way through a React Native app and be right 99% of the time. With Flutter, my luckiest guess would lead me to an abstract base class... Then I'd have to dig around to figure out what the hell I need to use to make a view scrollable. Seriously:
- High-perf Skia-based UI for pixel-perfect rendering, plus high-level Material design widgets
- React-inspired, productive development model
- Fast dev/iteration cycles with hot reloading
- Productive, high-performance Dart language, natively compiled (AOT on iOS)
- Native interop with underlying iOS/Android APIs
- Actively developed by Google
Unfortunately I've run into a few issues with React Native that I've had to work around. I submitted repros months ago but received no response from the React Native team until the last couple of days, when they closed the issues without even looking at them because they hadn't received comments/activity from other devs. In the last 2 days React Native has closed 773 other issues it considers low priority:
This gives me low confidence that React Native will become a high-quality platform, with current low-priority issues lingering indefinitely, so I welcome competition from Google with Flutter and will be eagerly looking forward to trying it out when it gets out of alpha.
Hot-reloading and good performance are very attractive parts of Flutter, but they really should have reconsidered the decision to make their own UI widgets. When you use the native UI elements, you get that native look-and-feel for free, and you don't have to dump man hours into replicating that behavior. They could have a "UI backend" which calls out to the native UI elements for each platform. The great thing is that since they use these UI widgets natively on Fuchsia, they can use their existing code as just another backend on that platform without having to throw the work away.
My favorite thing about Flutter is that it looks like they took some heavy inspiration from React. If anyone reading this isn't familiar with React, or doesn't really "get it", I'd highly suggest reading "Removing User Interface Complexity, or Why React is Awesome".
One of the big problems with implementing a UI toolkit is having to re-implement everything relating to accessibility. Although it's totally understandable that they're still focusing on the core.
Something I'd be interested in seeing is how Dart and Flutter might affect battery life. I'd expect the stock UIs to be pretty well optimized by now, but I have no idea how Dart stacks up in performance.
Count me a skeptic. This is the same approach taken by Java's Swing (now JavaFX) toolkit, and apparently it has exactly the same issues. Swing never felt quite right even after decades of tweaking.
I wonder how stuff like navigation is built. If that's all in Dart, I'd be interested in seeing how the back stack looks in the hierarchy explorer, i.e. are previous screens still rendered?
As others note, the UI model seems React-y: you write "builder" methods that recreate a widget tree when things change, and something behind the scenes sorts out an efficient way to update the screen with just what really changed. I'm not hugely worried about performance: your UI rebuilds should be separated from your animations, and anyway, building your virtual widget hierarchy ideally shouldn't be too CPU intensive in the first place.
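To illustrate the general shape of that idea, here's a toy sketch in Python (my own illustration of virtual-tree diffing in general, not Flutter's or React's actual algorithm), where each "widget" is a cheap (tag, props, children) tuple rebuilt on every change:

    from itertools import zip_longest

    def diff(old, new, path='root'):
        if old == new:
            return []                          # identical subtree: nothing to do
        if old is None or new is None or old[0] != new[0]:
            return [('replace', path, new)]    # widget type changed: swap subtree
        changes = []
        if old[1] != new[1]:
            changes.append(('set_props', path, new[1]))
        for i, (o, n) in enumerate(zip_longest(old[2], new[2])):
            changes.extend(diff(o, n, path + '/' + str(i)))
        return changes

Rebuilding the whole description is cheap; only the change list at the end touches the actual screen, which is why the rebuild-everything model performs better than it sounds.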
Hot reload is pretty great. I can't actually compare with "real Android" dev, but changes to my little app showed up in under a second in an Android phone, emulated or real. There were a couple surprising things about the basic libs, e.g. Flutter master only recently added a convenience object to bundle together a radio/checkbox and its associated label-stuff (RadioListTile).
The Flutter Gallery app is available on Google Play and its source is in the Flutter git tree. You can see a lot of the Material widgets implemented, including rich list types (e.g. tiles w/photos), pull-from-the-side drawers, top-of-screen tabs you can swipe through, bottom-of-the-screen nav bars etc. Even on iOS Google seems to follow Material guidelines a lot (or at least, the Daring Fireball guy complained that they do; I don't have iOS to check), so maybe it's the easiest fit if you're prepared to do the same. Someone who works on Flutter mentioned elsewhere in these comments that they're working on components that look more like the iOS-native ones, though.
Although Android Studio is _based on_ IntelliJ, you need to get actual IntelliJ if you want to use the plugin (Studio's component versions don't match the ones that the Flutter plugin works with, I think). Also, if you have Studio 3.0 canary installed (like to futz w/Kotlin, heh!), you need to either configure Flutter to look for the stable Studio 2.3's copy of Gradle (flutter config --gradle-dir=...) or just make sure 2.3 is located where the flutter tools look by default (~/android-studio for me on Linux). People working on Flutter helped some of us through this at https://github.com/flutter/flutter/issues/10236#issuecomment...
You get a lot of IDE-ish luxuries (as OP notes): Control-Space to offer identifiers, methods, or params available; autoformatting with dartfmt (right-click menu); lots of quick feedback when you mess something up.
Hixie (Ian Hickson) of the HTML5 spec works on Flutter which is kinda neat (he did RadioListTile just now! and there's a milestone on GitHub named 'Make Hixie Proud' haha :D). Outside of the tech specifics, Dart's an interesting creature in that it seems like it's got some key customers in Google (AdWords, so, like, the part that makes money) but comparatively little pickup outside. On Flutter GitHub you see people paying attention to outside-adopter issues (or even passerby issues such as my Gradle-version thing recently). There's apparently lots of tooling available publicly, e.g. a package manager (pub), dartfmt, IDE plugins, a playground (dartpad.dartlang.org) etc. Curious to see if there's any more pickup on the outside.
Given the big problems with these limits currently happening with bitcoin, this seems impossible to use in a high bandwidth system. Already bitcoin transactions are shockingly expensive, with the bulk of the cost hidden from the user by the payout of coins to the miners.
I believe it currently costs ~$1.50 in fees for a single Bitcoin transaction, assuming you want it confirmed reasonably quickly. Not what I would call nearly free!
If you check the host's reviews, it turns out he is Airbnb'ing out his father's hotel rooms all over Israel. This is blatantly visible, and yet Airbnb allows it. Samples: "The staff at the hotel"; "The place is more like a cheap hotel or a hostel (not a 'home' like other Airbnb places I've rented in the past)."
My trip was awesome nonetheless; brief report here: http://www.flyertalk.com/forum/25981843-post8.html
Neither hosts nor guests are legally related to AirBnB. Hence, none of the parties is really protected, even when AirBnB talks about "insurance".
Not even the help desk is AirBnB: as they call them, they are "community helpers", and their help is not legally linked to AirBnB.
They change the conditions whenever they like, and present them for you to accept as you log in. If you disagree, you have to remove yourself from the platform via email.
They promise a rewards plan for good hosts, for which most of your ratings must be high. It turns out to be statistically near-impossible once you read the actual conditions: you must be reviewed by more than 80% of your guests, and more than 80% of your scores must be five stars (or similar).
They don't pay your social security, your welcoming time, your help, or anything else we'd call "added value".
Nevertheless, they added surreptitious-yet-public ratings for things nobody is paying for, like being the tourist guide for someone who is renting a room for 10/night.
They don't care about hosts' opinions, even though hosts are the ones putting the real value on the platform, paying taxes on it, and doing the face-to-face with the end client. If your guests arrive 8 hours early and complain that you didn't receive them, the bad score is on you (this happened). If your guests leave a mess behind, AirBnB weighs whether to cover it for the sake of its public image, or to claim that, according to the rules, it is not covered.
Everybody is free to do whatever, but after my experience I trust hotels more than anything.
I am really surprised. Things are going well for them as a company. Why do they want to screw it up for themselves?
The larger the company, the more it's going to decay and suck. The degree to which it does is a function of management style and org structure.
Of course, Kool-Aid doesn't help matters, especially at scale, where it rings hollow.
There are three selection biases here: visibility on HN (which selects for an "interesting" story, not truth), disgruntled employees being the ones who speak up, and commenters wanting to rant about AirBNB. It's almost certainly a mistake to judge them negatively based on this story.
> In 2015, Glassdoor ranked the company as the #1 place to work, in 2017 that ranking dropped to 35th, and many employees are speaking out.
35th "best place to work" and "people are treated like cattle"? Somebody's giving you misleading statistics: Either it's Glassdoor.com, or the managing editor of "broke-ass stuart".
I don't think there's a single company above 10,000 employees that you couldn't force a similar framing on.
Well, that's what they did! All UI events and layout on the original iPhone were handled on the main thread. I doubt asynchronous layout or event handling would have improved the experience on its single-core CPU.
The key technical advantage the original iPhone had was Core Animation, which composited the laid-out views and applied animations to them in a separate process. It ensured that all views would appear at the correct position in their animations each frame with no jitter, and kept most of the per-frame work in one place. But the animations were all initiated on the same main application thread that handled events, performed layout, and so on.
    R0  0000BEEF    R1  000BEEF0    R2  000000BE    R3  00000001
    R4  BEEF0000    R5  00005F77    R6  00000000    R7  00000000
    ...
The other thing that I'd suggest as an improvement is a 16-byte-hexdump mode with ASCII on the right for "View Memory Contents". Other than that, with perhaps the exception of "infinite loop detection", the other features look useful.
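For what it's worth, such a hexdump view is only a few lines; a minimal sketch in Python (the exact formatting is my own assumption, not the simulator's):

    def hexdump(data):
        # 16 bytes per row: offset, hex bytes, then printable ASCII
        # (dots stand in for non-printable bytes).
        for offset in range(0, len(data), 16):
            row = data[offset:offset + 16]
            hex_part = ' '.join(f'{b:02X}' for b in row)
            ascii_part = ''.join(chr(b) if 32 <= b < 127 else '.' for b in row)
            print(f'{offset:08X}  {hex_part:<47}  {ascii_part}')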
Its front end was just thrown together, though, and it has a few issues. It would be nice if it ever develops a rich front end and progresses toward what VisUAL has.
Maybe some code samples could be a good addition.
I used a similar program to learn assembly: GNUSim8085, which targets the Intel 8085 architecture and is packaged for most distros.
The smaller the holes, the simpler the recovery, but clearly the consequences are a lot more dire than they let on. I was under the impression that the material was extracted at the point of removal, not through a separate mechanism. Sounds like there's some significant contact time and loss of material.
Which makes one wonder what kind of surgery one of these would be useful for. Precancerous cells? Nope. Infection? Same problems.
"Amy Reed, Doctor Who Fought a Risky Medical Procedure, Dies at 44"
Why was her name removed?
PS: How are you generating these pages? Jekyll? something else?
EDIT: This probably needs editing -- "nor will reading the posts will not teach you how to write good specifications;"
I think you probably meant to write "nor will reading the posts teach you how to write good specifications;"
The reason I ask is that Larry's buffered-reference-counting attempt surely has implications for single-threaded code that may rely on the existing semantics -- e.g. a program like this may no longer reliably print "Deallocated!":
    Python 2.7.13 (default, Mar 5 2017, 00:33:10)
    [GCC 6.3.0 20170205] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> class Foo(object):
    ...     def __del__(self):
    ...         print 'Deallocated!'
    ...
    >>> foo = Foo()
    >>> foo = None
    Deallocated!
    >>>
Similarly, what about multi-threaded Python code that isn't written to operate in a GIL-free environment -- absent locks, atomic reads/writes, etc.? At best, you might expect some bad results. At worst, segfaults.
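For example, here's a toy sketch of code that quietly depends on the GIL today (what exactly would happen under any particular GIL-free design is my speculation):

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            counter += 1   # load, add, store: three bytecodes, not atomic

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # can print less than 400000

(Strictly, += can already interleave between bytecodes even with the GIL; the GIL's harder guarantee is that the interpreter's own internals don't get corrupted, which is exactly what gets delicate once it's gone.)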
Are these all bridges that need to be crossed once a realistic solution to the core GIL removal issue is proposed? As glad as I am that folks are still thinking hard about this problem, I'm personally sort of pessimistic that the GIL can be killed off without a policy change wrt backward compatibility. Still, I do sort of wonder if some rules of engagement wrt departures from existing semantics might help drive a solution.
I suppose this could be broken by injecting a unique visitor ID that hashes to something with an absurd number of leading zeroes? That's assuming the user has control over their ID and that I'm understanding the algorithm correctly.
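For illustration, if the counter is a Flajolet-Martin-style leading-zeros estimator (an assumption on my part; I don't know what they actually run), the attack would look roughly like this:

    import hashlib

    def leading_zero_bits(visitor_id):
        h = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
        return 256 - h.bit_length()   # zero bits at the top of the 256-bit hash

    def estimate_uniques(visitor_ids):
        # The estimator only remembers the deepest run of leading zeros it
        # has ever seen, estimating ~2**max_zeros unique visitors, so one
        # crafted ID hashing to, say, 40 leading zeros inflates the estimate
        # to ~2**40 on its own.
        return 2 ** max(leading_zero_bits(v) for v in visitor_ids)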
If that's true, why did they hide vote numbers on comments and posts? It used to say "xxx upvotes, xxx downvotes"; now it just gives a single number and hides the breakdown.
>> Nazar will then alter the event, adding a Boolean flag indicating whether or not it should be counted, before sending the event back to Kafka.
Why don't they just discard it instead of putting the event back into Kafka?
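For reference, the flag-and-republish step they describe might look something like this with kafka-python (the topic names and the looks_legit check are hypothetical, not Reddit's actual code):

    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer('vote_events',   # hypothetical topic name
                             bootstrap_servers='localhost:9092',
                             value_deserializer=lambda b: json.loads(b))
    producer = KafkaProducer(bootstrap_servers='localhost:9092',
                             value_serializer=lambda d: json.dumps(d).encode())

    def looks_legit(event):
        return True   # placeholder for the actual vote-scoring logic

    for msg in consumer:
        event = msg.value
        event['should_count'] = looks_legit(event)   # the Boolean flag
        producer.send('scored_vote_events', event)   # hypothetical topic name

A plausible reason to keep rather than discard: flagged events stay in the stream for other consumers, offline analysis, and auditing of the scoring itself.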
I wonder what would happen if this person became permanently burned out, or committed suicide, because of this long exposure; my understanding is that the employer is responsible for that unless the contract has some compensation and mitigation for the effects on the worker's health...
Perhaps Google should consider doing the same?
Increased policing -- people fear repercussions for posting disturbing content. The darkest corners live on.
Increased exposure -- people share the psychological burden of knowing disturbing content exists, and develop meaningful discussions and coping mechanisms.
These companies have just been hiding behind the free-speech folks, with their heads buried in the sand about the long-term effects of all this content.
If we can figure out ways to get our govts to tell us how many policemen we need and what the process is to become one, there is no reason Silicon Valley should be doing this vital policing in secret.
Aside: After reading that article, henceforth every time Google drops a product or weirdly changes focus/direction, I am going to say "Google is shaking the bear again".
We met on here last year discussing my quantum game. Shameless plug for everyone else:
(Sorry about expired security cert!)
Edit: Seems it's mobile only, or the mouse drag event handling is masking the click event. That is, it rotates for me on mobile, but not on my desktop browser (chrome 58.0.3029.110).
1) You should add some explanation of why making light go through a number of crystals (each slows it down by a quarter wavelength) lets the beam go through a beam splitter in one or two directions.
2) Pinch-to-zoom works, but the photon beam moves on a path that's not affected by the zoom. Firefox on Android; I didn't check with other browsers. But it works on a 4.7" screen, which is great.
If someone plays past the first few levels, you can be quite sure they're not afraid of some explanation of how stuff actually works, and having that would have made the game a lot more fun for me.
Also, in the Sagnac-Michelson-Morley level, you're supposed to place a "Sagnac interferometer" somewhere (hint: it's the vacuum jar, and that's a different thing). ;)
It's mostly the polarization (and phase to a lesser degree) that I am referring to. That stuff is way more intuitive on a real optical table.
Those were the days, really. Having barely started my English studies, I went looking for a single piece of information and found so much else along the way. The tinkering and nights of frustration gave me insights and a feeling of accomplishment that set me on the path to the profession I have today.
Then I started high school and someone showed me a shaded and textured cube they had rotating on their screen and I found out what math is good for :)
It's fairly new - started about a year ago, but there's quite a lot of stuff in there already (bootstrapped from the old XML stuff, I believe).
Funny, if I had to summarize it in one sentence I'd describe it in the opposite way: Bayesian inference is a way of making less sharp predictions from your data, with quantified uncertainty.
If you're looking for a place to start, I'd go to Andrew Gelman's introduction to the Stan language: https://www.youtube.com/watch?v=T1gYvX5c2sM
There are Stan implementations in R, Python, and Julia, or you can run it from C++, since it's written in C++. I think this has greater potential to change how we deal with the unknown than AI or other machine learning.
> given some condition on a distribution of distributions, when do we feel that a guesser is taking too long to make a choice?
This is like a person who is taking too long to identify a color, or a baby deciding what kind of food it wants while we wait for it to do so. For a certain interval it makes sense, but after a point it becomes pathological.
So for example if we have two distributions,
> uniform distribution on the unit interval [0,1]; uniform distribution on the interval [1,2]
then we get impatient with a guesser who takes longer than a single guess, since we know (with probability 1) that a single guess will do.
Now, if we have two distributions that overlap, say the uniform distributions on [1,3] and [0,2], then we can quantify how long it should take before we know the choice with probability 1, but we can't say in advance how many observations any guesser will need before it can say for certain which one it is. As soon as an observation falls outside the interval (1,2), the guesser can state the answer.
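A quick simulation of that claim (a sketch, assuming the true source is the Uniform[0,2] distribution):

    import random

    def draws_until_certain():
        # Count draws until one falls outside (1,2), i.e. until the
        # Uniform[1,3] hypothesis can be ruled out with certainty.
        n = 0
        while True:
            n += 1
            if not 1 < random.uniform(0, 2) < 2:
                return n

    # Each draw escapes (1,2) with probability 1/2, so the count is
    # geometric with mean 2:
    print(sum(draws_until_certain() for _ in range(10000)) / 10000.0)   # ~2.0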
Now, things can get more interesting when the distributions are arranged in a hierarchy, say the uniform distribution on finite disjoint unions of disjoint intervals (a,b) where a < b are two dyadic rationals with the same denominator when written in lowest terms.
If a guesser is forced to guess early, before becoming certain of the result, then we can compare ways to guess by computing how often they get the right answer.
Observations now give two types of information: certain distributions can be eliminated with complete confidence (because there exists a positive epsilon such that the probability of obtaining an observation in the epsilon ball is zero) while for the others, Bayes theorem can be used to update a distribution of distributions or several distributions of distributions that are used to drive a guessing algorithm. A guess is a statement of the form "all observations are taken from the uniform distribution on subset ___ of the unit interval".
Example: take the distributions on the unit interval given by the probability density functions 2x and 2-2x. Given a sequence of observations, we can ask: what is the probability that the first distribution was chosen?
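A minimal sketch of that calculation (assuming the two densities were a priori equally likely to be chosen):

    import numpy as np

    def posterior_first(xs, prior=0.5):
        # Likelihoods of the observations under f1(x) = 2x and f2(x) = 2 - 2x.
        l1 = np.prod(2 * np.asarray(xs))
        l2 = np.prod(2 - 2 * np.asarray(xs))
        # Bayes' theorem: P(f1 | xs) = P(xs | f1) P(f1) / P(xs).
        return prior * l1 / (prior * l1 + (1 - prior) * l2)

    print(posterior_first([0.9, 0.8]))   # observations near 1 favor f1: ~0.97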
The answers to these questions can be found in a book like Probability: Theory and Examples.
The crucial difference is that statistical inference does not consider causation at all; its domain is observations only, and observations alone cannot establish causation, even in principle.
Correlation is not causation. Substituting Bayesian inference for logical inference should result in a Type Error (where are all these static-typing zealots when we need them?).
This is, by the way, one of the most important principles: the universe exists; probabilities and numbers do not. Every causation in the universe is due to its laws and related structures and processes. Causation has nothing to do with numbers or observations. This is why so much of modern "science" is a non-reproducible pile of crap.
Any observer is a product of the universe. The Bayesian sect is trying to make it the other way around. Mathematical tantras of the digital age.
They basically took the same idea and, instead of just producing a hack, finished a nice, full version of it. Very nice write-up!
But, they don't have any web UI for it yet, so I am getting tempted to revive my dormant side project and likewise make a proper polished version of it with a web interface. If anyone on here is a talented web dev with an interest on working on this as a side project (for free, and for fun, though we could explore monetization if it works), feel free to get in touch! (ps extra get in touch if you are near south bay area/Stanford)
PS: Incidentally, I independently came up with the same idea as in Deep Interactive Object Segmentation (https://arxiv.org/abs/1603.04042) and implemented it for my Stanford CS 229 (Machine Learning) project first -- the hack came later. The ObjectCropBot hack allows only cropping, not clicking, because that was faster to hack in, but I think it's ideal to allow both cropping to constrict around the target object and clicking (clicking alone leaves too much ambiguity, e.g. do you want to crop the person, or their shirt, or a subset of that shirt).
Are you available for consulting? I have a project in this domain that you may be interested in.
Uncaught Error: This demo requires the OES_texture_float extension
I've seen this before on Reddit, though; pretty cool how well it works on mobile.
Also, I realize OP may not be related to the post or have created it, and that I should file a bug or whatever; I'm just posting it here.
Sure, you can encode/decode yourself, but it's faster in silicon.
Sure, there's OpenCL, but it's more fiddly, gives worse performance, and no one uses it.
I say this because, while I keep seeing this prison-is-terrible-the-convicts-need-compassion sentiment, not one person here has offered to help me in any meaningful way, even though I have documented my trials and tribulations over and over.
I have a 30-year history of software development, 14 or so of those with the LAMP stack. No one reading this is willing to even talk about some side project or a prove-yourself two-week gig? OK, right, I get it... this isn't a help-wanted or job board, fine, that's cool.
But still, I don't get it anymore... is everyone just into some sort of bullshit social-signalling exercise or, perhaps worse, willing to help ex-cons only as long as they are funneled into low-paying, exploitative, back-breaking jobs with no future that almost surely lead 95% of them back into crime?
If so, can we start being honest that that's what all this discussion is about: "boy, someone should sure do something about how screwed these people are, but hell no, it won't be me."
Yeah, so I'm frustrated and scared and broke and all that, so try to forgive my rant...I'm sure I'll come to regret it as I do so much else in my life.
https://postmoderncoder.svbtle.com/fear-and-jaywalking-in-la...
https://news.ycombinator.com/item?id=14394324
https://news.ycombinator.com/item?id=14302656
I think that's the moral dilemma with prison systems. It's easy in abstract to say that we should just focus on rehabilitation and take this utilitarian argument about what's best for society. But I know that if, for example, someone were to harm my children, I would have trouble being convinced that that person needs free college and housing (partly paid for by me). Even if that statistically led to a better outcome for society, it would not seem like justice; rather it would seem that person is being rewarded for harming my family. This I think is a general problem with utilitarianism - that when we just focus on group outcomes, we sometimes lose sight of things like individual rights and justice, messy moral concepts that don't always create optimal group results.
Maybe there is some way to do both things or differentiate between types of criminals. I don't really have a solution. Just posing the conundrum.
My point is, what works for Norway (population: 5.2M; prison population: 3,874, or 0.0745% [0]) is never going to work for Brazil (population: 207M; prison population: 659,020, or 0.31% [1]).
I like the idea of these idyllic prisons, but the inmates who would fit them are the exception here. Nevertheless, the system should offer them, and help well-behaved inmates get out of the terrible traditional prisons so they don't become worse. It's often said that prisons are like college for criminals.
I don't know if this system could handle things like this: http://www.aljazeera.com/news/2017/01/60-killed-beheaded-gri...
In summary, I love the idea but let's not pretend that by just having those prisons that things will change drastically. It's a complex situation and there are problems everywhere (bad laws, slow courts, poverty, etc).
0 - http://www.prisonstudies.org/country/norway
1 - http://www.prisonstudies.org/country/brazil
But let's zoom up to the bigger syndrome. Notice the author quotes at least one offender, but doesn't bother talking to any corrections officers. Did it occur to him that someone who worked in a prison for 20 years might know a little bit more about corrections than someone who read a bunch of studies and statistics?
Symptomatic of a broader problem - the chattering classes, who consume and generate information, are increasingly cut off from the real world, and increasingly influential. Of course it's easy to have opinions about how something "should" work when you have no experience and no skin in the game.
Rubbish. In the county where I live, Essex in MA, inmates are given the clothes they were wearing when arrested and a ride to the courthouse where they were convicted, and turned loose. Pity the guy arrested in May who gets out in January. They shoplift at the local Marshalls on their way home. I wonder why?
Singapore has much lower crime stats than Norway and the USA [1,2]. Let's take a look at how Singapore treats its prisoners.
The punishment for even minor crimes (like graffiti) includes caning. They stick you in a prison cell for months, and on some random morning, they will wake you up and give you the sentenced number of hard beatings to your backside. The beating is done by someone with specialized training to inflict maximum pain (while remaining safe). So for months, every day you are scared, never sleeping soundly, as you don't know if this will be the night of your beating.
- Would the graffiti rate in the USA go up or down if the USA imposed the same penalties as Singapore?
- Would reducing recidivism rates by 20-50%, as the article claims is possible, really be enough to lower crime in the USA to an OECD-average level?
- Norway and Singapore each have ~5M people. Singapore has 130 rapes a year; Norway has 1,000. How do you justify leaving the Norwegian justice system in place to the additional 800 rape victims in Norway, if a better system for reducing crime has been invented?
Maybe the US is stuck in a middle no-man's land that leads to bad outcomes. To address this, it could either make prisons into hotels/universities (Norway) or impose stricter penalties (Singapore). But if someone did something terrible to one of my family members, I know which system I'd prefer.
Of course, it's never fully accurate to measure systems by comparing numbers across different cultures/measurement systems. The main point remains, though.
"Oregon, which insists that programmes to reform felons are measured for effectiveness, has a recidivism rate less than half as high as Californias."
Assuming it's not a statistical blip, I wonder why Oregon is so different to California. Seems to me that a politician who promises to reduce the recidivism rate and thereby save taxpayer dollars would get more votes.
Privatized prisons (in the USA) are money-makers, holding mostly low-to-medium-risk offenders; you can even buy shares in them on the stock market.
With more police on patrol there will be less crime. Spend less money on prisons and more money on local police force.
The Renew Act of 2017 is trying to expand the age limits for expungement of first-time offenders' records. It's a start, but there are more opportunities to fix the system post-prison too. You don't even have to go to prison to have a record that prevents you from being productive in society.
One of my favorite examples is the volunteers for smoke jumping, putting out forest fires. There are states where it's illegal for a person who did this job in prison to obtain the same job outside. If we keep up the barriers, where do we truly expect people to go?
Another portion sees it as constitutionally-granted slave labor, or an opportunity for profit.
That being said, one concept I rarely see discussed is the use of basic income as an incentive against crime, particularly violent crime: if you lost your citizen's dividend after conviction and slowly earned it back every year upon release, that would act as a powerful and immediate incentive to avoid violence.
There is a common confusion between salt (sodium chloride) and sodium, with many falsely believing that salt is not just used for taste but is essential for health.
Sodium is a vital element found in almost all plants and animals, and there is no need for extra sodium intake because our food already has plenty.
Sodium and potassium balance in the body is essential for cell physiology and our health: https://en.wikipedia.org/wiki/Na%2B/K%2B-ATPase
I haven't added salt to my food, and have avoided products with salt, for more than a year; "heavy food" doesn't feel as heavy, my skin is not as dry as before and looks better, and injuries seem to heal faster.
I'll be interested in reading the articles the newspaper report is based on, but ATM I'm not sure how much the new findings contradict (vs. extend) what's thought to be true about the physiological roles of sodium. In terms of implications for health, conditions like hypertension are enormously heterogeneous in origin, salt intake being only one factor among a huge number involved.
I was interested in the comment that high salt intake was potentially adverse for bone health via glucocorticoid stimulation. One thing I've recently learned is how high dietary sodium negatively affects bone calcium balance via mechanisms within bone cells, and in some people, excessive renal calcium excretion as well. These issues aren't AFAIK primarily mediated by elevated cortisol. So it seems to be suggesting another way high sodium intake promotes bone loss.
Goes to show we know a lot less than we think we know, in this case about body regulation of essential minerals like sodium and calcium. When we realize that also applies to every other factor we think related to high blood pressure or osteoporosis, it's very humbling indeed.
Maybe I'm missing some of the subtleties here?
Is anyone familiar with studies of the health effects of salt that keep this in mind? I.e., keeping food amounts constant across groups while varying only the salt content?
IIRC they thought it was an interesting observation, since it went against all common understanding, but I don't think they ever came close to explaining why.
That's odd; what if a high level of sodium simply reduces your perspiration? Wikipedia says "The volume of water lost in sweat daily is highly variable, ranging from 100 to 8,000 mL/day", so between 0.1 and 8 liters per day! One source even says 10 to 12 liters per day!
If, on average over a long period, the subjects drank, say, exactly 1 liter per day and peed strictly more than 1 liter of water, then I'd agree with what this doctor said. But neither this NYTimes article nor the two papers mentioned use the words perspiration or sweat in their abstracts/summaries. Why not?
Off-topic: did nytimes.com make it hard on purpose to select text from their article? On Chrome and Firefox I can't select text easily; only Edge lets me.
The only change for me (besides of course a period of adjustment where food didn't taste like much) has been a significantly lower blood pressure (from typically 145/85 to 105/65), everything else has remained pretty much the same.
And cystic fibrosis, which is perhaps best tldr'd as a salt wasting condition, frequently leads to Cystic Fibrosis Related Diabetes, which is neither type 1 nor type 2 diabetes.
Not really true at all. So many times I've found a dead link that archive.org doesn't have a copy of; it's just gone. Entire domains loaded with content have disappeared. In general, people don't copy and save other people's material, except temporarily for viewing.
This truth is often hidden by some given abstraction.
(file, save, download, streaming, etc.)
Businesses have been built on such abstractions. Success stories.
On the flip side, existing businesses built before the internet, which do not know the truth, have been fed these abstractions. These businesses may have nothing to gain from participating in the copy machine. Whoever is feeding these businesses abstractions that hide the truth is not helping them; they are helping themselves and watching these businesses be destroyed by a copy machine.
One category that he missed is embedding digitized products like software into dedicated hardware. It's a form of DRM that is harder to crack (I think).
Here are some examples from the audio world that are variations on this idea.
The main audio software platforms, known as Digital Audio Workstations (DAWs), have all evolved to a point where they have to support open plugin formats. Plugins are implementations of digital signal processing software, used in combination within the DAW to produce the end result: a complete, finished sound file.
Because audio production and engineering is hard (basically, things tend to not sound good) there is constant development, and fierce competition, in this niche market.
Various forms of DRM, or licensing systems, are almost universally used. These provide enough friction (you can still get cracked versions of most plugins, but at the cost of inconvenience, malware, or compromised stability) that a modest number of small companies have built businesses in the market.
But the competition is fierce, and the trend in license prices has been steadily down. The cracks do hurt the sales.
One company that has thrived in this market is Universal Audio. They put heavy development into making premium, well-respected plugins, but those plugins only run on their proprietary DSP systems. For a while this could be seen as a genuine advantage, as users commonly ran up against the limitations of their CPUs.
This is no longer the case, but the company has steadfastly stuck to its proprietary system. One technique they used was to embed their DSP in dedicated sound interfaces.
The sound interface market is also hotly contested, with companies constantly fighting against commoditization. So they developed high-quality sound interfaces, something all audio producers have to have, and use their catalog of exclusive software plugins as a "value-added differentiator."
CEDAR is the pre-eminent developer of specialized software for challenging noise-reduction problems. For a long time, they limited the use of their algorithms to their own DSP hardware. If you wanted these industry-best algorithms, you had to buy their relatively expensive systems. While they now offer some of their software as plugins, they continue to use dedicated hardware as part of their product strategy.
One interesting possibility is embedding otherwise unremarkable software into dedicated hardware because of the user-interface advantages. By giving the user access to physical controls that do nothing but mimic their virtual cousins, the goal would be to dramatically increase the usability of the software. There has been some movement in this direction, which is actually a kind of throwback to how the first generations of audio DSP devices worked, back when dedicated hardware was the only way to implement such processing.
You can see this tension around user interface play out in the realm of audio mixers for live performance. They use dedicated interfaces to run the real-time DSP, but combine various virtualization strategies. Some of the biggest audio plugin companies, like Waves, have released versions of their popular software plugins to run on some of the modern live mixing hardware systems, thereby generating new revenue streams from existing products.
At this point, while it is entirely possible to run an entire mix of a live show on a PC with a mouse and keyboard, it is such a sub-optimal user experience that I have never witnessed anyone do it. (Though I'm sure some foolish, brave, and budget-challenged audio engineers have done it!)
> Careful readers will note one conspicuous absence so far. I have said nothing about advertising. Ads are widely regarded as the solution, almost the ONLY solution, to the paradox of the free. Most of the suggested solutions I've seen for overcoming the free involve some measure of advertising. I think ads are only one of the paths that attention takes, and in the long run, they will only be part of the new ways money is made selling the free. But that's another story.
Since this is (2008), does anyone know if that blog covered that "other story" at some point? I'm currently digging through a Google search for
advertisement site:kk.org inurl:thetechnium
I mean, for a movie, just wait a few months and you can download it.
If you don't want to wait, you still have to wait for the next movie to come out, so it makes no difference really.
To say it another way: no matter where you are in the pipeline, you still have to wait the same amount of time for new data to arrive.
> Over the course of a year, Google quietly turned its map inside-out, transforming it from a road map into a place map.
I've long been amazed how we somehow transitioned during the early 20th century from a mental model of roads and paths running through locations to places (house lots, etc) being the spaces between the roads. It's a natural thing to happen, but one of those invisible flips that happens on a timescale longer than a human lifetime.
But this anticipates the opposite: if you can stop worrying about how to get somewhere (because you don't have to drive or plan much -- self-driving or Lyft-style services can take care of the route planning) you can focus on the destination.
We see this phenomenon in subway maps which are famously schematic and not geographical.
(BTW, the transformation is visible in literature, which is how I noticed it. The sense of geography in, say, Jane Austen is completely alien to today's.)
At the moment, Apple Maps seems to have a more thought-through design for public transit than Google Maps. Which is to say, transit view in Apple Maps is either visually clean and uncluttered, or completely nonexistent, depending on whether they got around to adding your city. Clearly a lot of by-hand design work goes into it, which isn't a very scalable approach.
On the other hand, transit data in Google sometimes appears to have been munged with no human intervention and never received even a cursory check by a graphic designer. For example, turning on Transit view in downtown Toronto will show a mess of ungodly rainbow spaghetti which is meant to represent the streetcar system. There are lines on non-revenue tracks where no streetcars actually run, lines on streets that don't have streetcar tracks, random artefact lines that appear and then vanish two blocks later, and lines drawn diagonally through the middle of High Park where there is no street at all. Somehow, the data behind this spaghetti is diligently updated year-after-year (e.g. the new Cherry streetcar was added in 2016) without anyone involved in the process noticing that the results are hideously garbled.
It also took them about a decade to realize that the SkyTrain in Vancouver is a rapid transit system.
I'm currently using some offline cached map tiles downloaded with MOBAC, waiting for Google to change their colours back. Now that I realise it's a place map, and places generate advertising revenue, I think Google is unlikely to fix it.
Another problem, which happened recently in Taiwan, was when Google removed pinyin (Latin letters) from the street names, leaving only Chinese characters. Foreigners living here couldn't find their way around. I threw together a quick alternative to GMaps and told people about it - until Google put the pinyin back about a week later.
How can Apple catch up? Is there an obvious acquisition?
Back in the day before labels I would star places I needed to "bookmark" regardless of importance in time and how ephemeral that mark was. Then when labels appeared I thought that was ideal to mark places which are always important (because I can personalize the label) as opposed to a generic star which most likely meant a temporary bookmark.
It seems that with the new Google Maps, stars always get display priority (a star is shown even at the smallest zoom level), whereas labels only appear at an algorithmically defined location (which seems arbitrary).
All this time wasted in personalizing my map.
The big problem, which seems to make it not worth expanding your dataset for graphical maps, is that it is quite difficult to display a lot of data and still be easier to read than an aerial photograph.
Nope. Try dragging Pegman over anywhere in the rural Midwest and see what you get. 99% of US _paved_ roads, perhaps.
"Three different looks? Whats going on with Google Maps design?"
A/B testing perhaps?
Now if only they would sell all my communication data behind my back, or at least give it away to the NSA, this would be the PERFECT deal. Do you think there's a way to make sure my money goes to fight net neutrality?
Pretty sure the earliest was Grand Central in 2005-2006: https://techcrunch.com/2006/09/25/grandcentral-could-make-ph...
Google bought them and relaunched, but it wasn't an "experiment" so much as a rebranding / UI refresh.
Probably a poor copy of iMessages, but without the encryption.
Around here everyone has LINE, and I make more Skype and LINE voice calls in a month than I do phone network ones. The phone network is basically reserved for calling business for me.
.exe files created this way wouldn't launch, but a .com created this way would instantly reboot the computer. Even more weirdly, this only worked for the specific command that I happened to enter on my first attempt -- it might have been something like "cd games". If I changed anything at all, it no longer worked. Only years later did I realize that the characters I typed were directly executed as machine code, and that I must have stumbled upon a sequence of instructions that caused a CPU exception!
For several years, this weird file was the only way I knew to programmatically reboot a computer, so I renamed it "reset.com" and kept it around. I think it stopped working only in the Windows XP era when DOS programs were finally sandboxed to some extent.
This makes any small file a valid COM file as far as Windows is concerned. NTVDM doesn't care, it will happily execute your holiday snaps if given the chance. It's not difficult to craft a valid GIF, PNG, etc that does something useful when executed from byte 0.
Such an image will pass most mime-sniffing protections. For example, given such an image and a "foo.png.exe" Content-Disposition header, Internet Explorer used to skip all security warnings. Combined with "Hide extensions for known file types" it would ask you where you'd like to save "foo.png", preserving the executable extension behind your back.
Upon double-click the loader notices the MZ signature is missing, fires up NTVDM, and starts executing the image from byte 0. If running under NTVDM is too restrictive, it can always break out with BOP instructions.
The lack of structure also makes COM files a simple vector for exploiting hash collisions. Any two prefix blocks with matching hashes that can survive execution can be used to create two variants of a program with matching hashes. Bit differences between the two blocks can be used as switches to control program behaviour.
The corruption mangles the header, removing the "MZ" magic number -> Windows thinks the file is a COM executable and attempts to load it into memory, COM style -> the file is not actually COM though and thus is far too large to fit into the memory available to the loader -> that error appears.
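In other words, the whole classification hinges on two magic bytes; a sketch of the check (my own illustration, not the actual loader code):

    def classify(path):
        # EXE files start with the 'MZ' signature; anything else is treated
        # as a raw COM image and executed starting from byte 0.
        with open(path, 'rb') as f:
            return 'EXE' if f.read(2) == b'MZ' else 'COM'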
Some simple things I remember playing with in middle school; these might work in a DOSBox emulator window:

    mov ax, 0x13   ; select video mode 13h (320x200, 256-color VGA)
    int 0x10       ; call the BIOS video-services interrupt

Some could low-level format hard drives, and there were lots of other fun utilities and amusements, much more so than batch files. I'd say the modern analogue is Python scripts.
(yes, you can just copy-paste it into a .COM-file and it will execute and just show a text - unless you've got a correctly working virus scanner)
This is a Virus Scanner test signature (https://en.wikipedia.org/wiki/EICAR_test_file), so the fact alone that you can read this comment might be meaningful ;)
It's funny how engineers think those times are long gone, whereas the demand for very compact systems with a low energy footprint is still there. Mastering programming within kilobytes is still a thing; don't disregard it!
From my novice view, it seems like it would be generally simpler to reason about.
Today we have CocosX for cross-platform sprites so games can be quickly deployed on multiple platforms, plus Unity and Unreal engines for 3D development. It's much better today, but I wonder what it's like to not have the kind of background of learning that I did.
(and you can also run code out of DLL and CPL files)
Either "Compatibility" is a time, or they should have asked "why".
Yes, it sounds like a COM file, with no relocation dictionary, etc., would have to be loaded at the same virtual memory address. IIRC one of the OP comments mentioned this.
Uh, more of the same, piled higher and deeper?
Okay, for my startup I decided to use the .NET Framework 4 with Visual Basic .NET and the usual acronyms: SQL (Structured Query Language) Server; ADO.NET (ActiveX Data Objects, for using SQL Server); ASP.NET (Active Server Pages, for writing web pages); IIS (Internet Information Server, which actually makes the TCP/IP calls for moving the data over the Internet and runs my VB.NET program that writes the web pages); etc. Okay.
But eventually it dawned on me that there is a lot of code running on my Windows machine that very likely is not from Visual Basic and .NET; I'm concluding that there is an older, say, Windows 95, 98, NT development environment based on C++ and a lot of library calls, a message queue for inputs, some entry variables and call backs to them, and quite a lot of functionality not also in .NET.
E.g., when my HP laser printer quit and I got a Brother laser printer, my old HP/GL (Hewlett Packard graphics language, a cute, simple thing for printing and drawing on pages) program to print files didn't work -- the Brother printer didn't do with HP/GL just what my HP printer did.
So, via .NET I wrote a replacement for such simple printing. Well, for writing the characters to the page, object, image, whatever it was (I don't recall the details just now), all I could see to use were some GDI+ (Graphics Device Interface) calls, but those seem not to be the usual way old Windows programs write to paper, graphics files, or the screen; instead there's something else, maybe from before .NET, and I don't know what that older stuff was. E.g., GDI or whatever it is didn't let me accurately calculate how many pixels horizontally a string of printable characters would occupy when printed, and thus all my careful arithmetic about alignment became just crude approximations -- no doubt Firefox, Chrome, Windows Image and FAX Viewer, etc. all use something better, more accurate.
So, what am I missing? Am I supposed to get out some 20-year-old books by Petzold or some such, get a Windows Software Development Kit, review C and C++, get lots of include files organized, etc., and start with some sample code that illustrates all the standard pieces of a standard Windows app?
Okay, but it appears that .NET does not yet replace all that old stuff? I mean, is it possible to write a full-function Windows app with just .NET and VB.NET/C#?
Q 1. What really is that old stuff? Where is it? Are there good tutorials? E.g., if we are to explain COM files, then let's also point to explanations of some of the other old stuff also still important?
Q 2. Is the functionality of that old stuff, for user level programs, maybe not for device drivers, by now all available via .NET but I've just not found it all yet? If so, where is it in .NET, say, for graphics and printing the .NET presentation thingy or whatever?
The code for my startup looks fine; it all seems to work and does all I intended, but still I wonder about that old stuff. E.g., eventually I noticed that my favorite editor, KEdit, apparently like any well-behaved old standard Windows program, will let me print a file. And the printing works great! I get a standard Windows popup window that lets me select font, font size, bold or italic, etc., and it works fine with the Brother printer. KEdit knows some things about writing to a printer that I didn't see in .NET. So, with just .NET, am I missing a lot of Windows functionality?
EXE files have a header which tells the loader whether and how they use different segments in memory. They can have much larger memory footprints than COM programs.
I know that web technologies are all the rage these days, but at least for static, publication-ready graphics, Grid is a really nice substrate, with well-thought-out lower-level abstractions.
EDIT: I should also add that it's documented within an inch of its life should anyone feel that it's worth recreating: https://stat.ethz.ch/R-manual/R-devel/library/grid/html/grid...
I also want other packages to be able to build off of plotnine; e.g., a package with the functionality of Seaborn could be built off of plotnine. The only constraint should be whether the backend -- in this case Matplotlib -- stands in the way. Matplotlib is evolving (though slowly) and has a very receptive community, so there is lots of hope.
* - Many people contributed to its history.
As an alternative that preserves the full power of Wickham's implementation, pygg is a Python wrapper that provides R's ggplot2 syntax in Python and runs everything in R.
    import altair as alt  # assuming the usual Altair import convention

    alt.Chart(df).mark_point().encode(
        x='age',
        y='height',
        color='sex',
    )
I don't mean to undermine your project, just wanted to know about significant differences.
Is it the way we concatenate functions to create what's essentially a sentence describing what we want the plot to be?
If I'm to dig in the manual, I might as well build my plots with the standard syntax of any random plotting library.
Is this "grammar of graphics" any good if you invest more time in it?
2. You can be "superhuman" with just image sensors (i.e. without lidar).
That said, I believe lidar is effective for precise object determination, e.g. whether a small thing on the road is soft or hard, which would help determine whether it's a stuffed animal, a real one, or a plastic bag in the shape of an animal.
The edge cases for self-driving cars are massive -- unless we dictate that our roads and streets be machine-friendly.
- cost
- redundancy
- depth
Just a thought, not a scientist/engineer
edit: then imagine it could be made into strips that go around the car... no rotation delay, and a full view all the time.
It'll be interesting to see how SDC makers handle the computational complexity of deep nets operating on these massive point clouds generated from LIDAR. It seems like these aren't getting any easier with Luminar claiming ~10 million points. Running that through a hefty 3D ConvNet could easily soak up a petaflop...
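Back-of-envelope, with every number below being my own assumption:

    # Assumed: points voxelized into a 512^3 grid, one 3x3x3 conv layer,
    # 32 input channels, 64 output channels, multiply + add = 2 FLOPs:
    flops_per_layer = (512 ** 3) * (3 ** 3) * 32 * 64 * 2   # ~1.5e13 FLOPs

A few dozen such layers per frame lands within an order of magnitude of a petaflop, so dense 3D convolutions over raw grids do look rough without sparsity tricks.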
Busy intersections could have strategically placed high resolution LIDAR "base stations" that wirelessly transmit the model to cars as they near the intersection.
Humans do it all the time and we accept that risk; why are self-driving cars held to a higher standard?
> How has your experience been compared to your previous tech?
Previous to using Nim I was primarily using Python. This was a few years ago now, but recently I was working on a project in Python and found myself yearning for Nim. There were multiple reasons for this, but what stuck with me was how much I missed static typing. The Python project used type hints which I found rather awkward to use (of course the fact that we didn't enforce their use didn't help, but it felt like such a half-baked solution). Dependencies very often required multiple guesses and searches through stack overflow to get working. And the resulting program was slow.
As far as I'm concerned, Nim is Python done right. It produces fast dependency-free binaries and has a strong type system with support for generics.
Of course, that isn't to say that Nim is a perfect language (but then what is). For example, JetBrains has done a brilliant job with PyCharm. Nim could use a good IDE like PyCharm and with its strong typing it has the potential to work even better.
> How mature is the standard library?
To be honest, the standard library does need some work. In the next release we do plan on making some breaking changes, but we always err on the side of keeping compatibility, even though Nim is still pre-1.0. Of course, sometimes this is not possible.
> How abundant are third party libraries?
Not as abundant as I would like. Great news is that you can help change that :)
The Nimble package manager is still relatively new, but you can get a pretty good idea of the third-party libraries available by looking at the package list repo [1].
Hope that helps. Please feel free to AMA, I'd love to introduce you to our community.
1 - https://github.com/nim-lang/packages/blob/master/packages.js...
I'm now using it extensively for a confidential-computing and blockchain project, which is quite exciting.
Having used Python, Go, C, Perl, and Java, I find Nim a breeze to code in. Occasionally the compiler glitches and you have to delete nimcache. Very rarely it fails to compile something and you have to rewrite a few lines differently. Not an issue. Build frequently to avoid surprises.
Not that much: it lacks examples and helper procedures that you would expect, yet I still feel more productive with Nim than with other languages.
Look at the packages. Most of the basic stuff is there. For small and medium projects it's usually not an issue; occasionally I have to wrap functions from a C library.
If you are looking for big, fancy libraries like Pandas and Sklearn, they are just not there. Use Nim for tools and services instead.
(As you can see, I was one of the authors of that library in a previous startup. We haven't worked on Nim-Pymod in a while, alas -- I've been focused on the new startup! -- but Nim-Pymod is sufficient for our needs right now.)
Our webserver main-loops are in Python; our number-crunching ML/CV/img-proc code is Python extension modules written in Nim.
As a C++ & Python programmer, I'm a huge fan of Nim, which to me combines the best of both languages (such as Python's clear, concise syntax & built-in collection types, with C++'s powerful generics & zero-cost abstractions), with some treats from other languages mixed in (such as Lisp-like macros and some Ruby-like syntax). I find Nim much more readable than C or C++, especially for Numpy integration. I also find Nim much more efficient to code in than C or C++ (in terms of programmer time).
And Nim is a very extensible language, which enables Nim-Pymod to be more than just a wrapper. For example:
1. Nim-Pymod uses Nim macros (which are like optionally-typed Lisp macros rather than text-munging C preprocessor macros) to auto-generate the C boilerplate functions around our Nim code to create Python extension modules.
2. Nim-Pymod provides statically-typed C++-like iterators to access the Numpy arrays; these iterators include automatic inline checks to catch the usual subtle array-access errors. Nim macros are themselves Nim code, which can be controlled via globals, which in turn can be set by compiler directives; by compiling the Nim code in "production" mode rather than "debug" mode after testing, we can switch off the slowest of these checks to get back to direct-access speed without needing to make any code changes. (And of course Nim's static typing catches type errors at compilation time regardless of the compilation mode.)
3. Nim exceptions have an informative stack trace like Python exceptions do, and Nim-Pymod converts Nim exceptions into Python exceptions at the interface, preserving the stack trace, meaning you have a Python stack trace all the way back to the exact location in your Nim code.
Earlier on in our development of Nim-Pymod, there were some occasional headaches with Nim due to its in-development status. Occasionally the Nim syntax would change slightly and that would break our code (boo). We've also debugged a few problems in the Nim standard library. I suppose these problems are an unfortunate consequence of Nim having a small set of core devs contributing their time (rather than being supported by Microsoft, Sun, Google or Mozilla). Fortunately, these problems seem to have stabilised by now.
The Nim standard library is reasonably large, somewhere between C++ STL (data structures & algos) & Python stdlib (task-specific functionality). I recall that the stdlib could use some standardisation for uniformity, but I haven't been watching it closely for the last year or so.
Third party libraries are not abundant, aside from a handful of prolific Nim community-members who have produced dozens of fantastic libraries (eg, https://github.com/def- , https://github.com/dom96 , https://github.com/fowlmouth , https://github.com/yglukhov ).
I'm happy to answer any other questions about using Nim in production!
Lead vessels were traditionally used (and may still be) to make certain dishes (like rasam, a watery lentil-based sour soup) in South India. No idea about any benefits or the reverse.
Update: Also, I've seen relatives of the generation before mine (in India) sometimes eating from silver plates. Not sure, but I think that may have been for some supposed health reason too (the sibling comment about silver reminded me of it).
Both copper and silver oxidize unpleasantly very quickly, though.
(Device spoiler: Water in glass bottles with copper coils.)
As for the comment about silver: it does have some antimicrobial properties. I'm amazed the effect is noticeable at the macro level of eating utensils.
In the US as well, there are various contaminants in water supplies all over the country, from PFOAs to lead, so it's a good idea to use a water filter or such. I don't think there's much danger of bacteria in US drinking water.
The only potential drawback I can think of is that some PET bottles seem to leach "hormone-like chemicals". Though, reading through the comments, copper pots also seem to have some toxic effect on the water.
As electrons flow through them, the germs die.
I love the thing, and I was drinking water out of it...
But one time I put alcohol in it... and I took a few sips from it and I almost immediately started feeling nauseous and weird... now I'm scared to use it, even for water.