"If your guy is involved in criminal activity and has to have criminal lawyers of the caliber of these two gentlemen, who are the best, well, okay they got the best. But its a problem I cant solve for you. And if you think Im going to cut you some slack because youre looking atyour guy is looking at jail time, no. They [Waymo] are going to get the benefit of their record. And if you dont deny itif all you do is come in and say, We looked for the documents and cant find them, then the conclusion is they got a record that shows Mr. Levandowski took it, and maybe still has it. And heshes still working for your company. And maybe that means preliminary injunction time. Maybe. I dont know. Im not there yet. But Im telling you, youre looking at a serious problem."
"Well, why did he take [them] then?". "He downloaded 14,000 files, he wiped clean the computer, and he took [them] with him. That's the record. Hes not denying it. You're not denying it. No one on your side is denying he has the 14,000 files. Maybe you will. But if it's going to be denied, how can he take the 5th Amendment? This is an extraordinary case. In 42 years, I've never seen a record this strong. You are up against it. And you are looking at a preliminary injunction, even if what you tell me is true."
Uber is having a very bad day when a Federal judge starts talking like that. A preliminary injunction looks likely. If Uber can't find anything, this goes against them. Nobody has denied that Levandowski copied the files. Uber paid $600 million for Otto's technology and people. Even if the files didn't make it to Uber's computers, Waymo can probably get a preliminary injunction shutting down much of Uber's self-driving effort. Then Uber gets to argue that their technology is different from Waymo's. It's going to be hard to argue independent invention when all the people are from Google's project.
Maybe both parties' intense desire for privacy in this matter has driven Google to this strategy.
The seeming ludicrousness of the result - Alsup's "go try again, harder this time" - is not caused by this case's parties playing badly. It is caused by poorly defined and understood laws surrounding what constitutes a defensible search. Data handling in this stage of legal proceedings is imperfect, and can be manipulated by both parties to drive up the cost of litigation, or to strategically avoid disclosing the key breadcrumb documentation that would otherwise have led to the smoking gun(s).
Edit: Please find the court reporter transcript here: http://www.documentcloud.org/documents/3533784-Waymo-Uber-3-...
Judge Alsup's comments are fairly aggressive in comparison to most commercial litigation, but the no-nonsense tone is par for the course.
"Nah, I didn't find anything. I found this plastic bag that looks like it mighta had something in it, but I'm pretty sure my friend left it here and it was empty when he brought it."
"Okay son, go search again."
If they just did the criminal trial first, he couldn't claim 5th protections, right?
We all have big dreams of starting our own company some day (I know I do), and many of us work for big corporations that would rather we never go anywhere and work for as little as possible (admittedly the market is forcing them to pay us a lot, but they aren't doing it out of goodwill).
The outcome of this will teach us all very valuable lessons. I can't be the only one who is a little paranoid that if I start my own shit I'll be sued or that I may even be sued for some of the side projects I'm working on even though I've never taken any code or resources from my company.
Judge Alsup always winds up with the most interesting cases :-)
"[The Judge] told Uber to search using 15 terms provided by Waymo, first on the employees computers that had already been searched, then on 10 employees computers selected by Waymo, and then on all other servers and devices connected to employees who work on Ubers LiDAR system."
Seems interesting that there's not a more comprehensive system or way to search for these since Google is clearly in possession of the specific documents they claim are stolen.
The way they're continuing the Judge's order to look for "15 terms" almost makes it seem like the extent of the original search was tied to file name or document titles or something?
Every Linux user should benefit from this decision; I am excited to see the progress they will make on the GNOME environment.
Love or hate it but Unity was IMO the best shot we had at getting an open source unified phone, tablet and desktop experience...and now this is effectively Canonical not only shutting down Unity, but refocusing efforts away from convergence and towards more traditional market segments. I mourn the death of this innovative path.
That said, hopefully this convergence with GNOME will eventually lead back to convergence...but for now that dream is dead it would seem.
I'm one of the ones who loves Unity 7, it's always been faster and less memory hungry than GNOME or KDE for me. I will just have to cling onto the LTS for as long as possible.
In the long-term I think this is good for Ubuntu and Linux users in general, less diversity can sometimes help an ecosystem form. I think many users just want a DE to stay out of the way and make life easier, so I hope some of the Ubuntu ease of use focus and community will get injected back into GNOME. I really hope a huge flood of users coming back forces them to look at their memory usage and get it under control.
Legitimately never thought I'd ever see this. Possibly the best thing that could happen for desktop Linux in this age.
Edit: and of course this would mean Ubuntu/Canonical and Fedora/RedHat basing their desktop OS's on the same platform, which can only mean easier development of desktop software and services.
Lately I've been hacking some projects on the Arduino Industrial 101, which has a small Atheros AR9331 MIPS based SoC, but it ships with a very old OpenWRT that doesn't seem to get much in terms of updates. I spent time building a new image for it based off of the LEDE fork that is being brought back into OpenWRT, then wrestled with various parts of the Linux side of the board, but ultimately I really don't want to be managing build pipelines for Linux system updates. At this point I may just wait for AVR changes to Rust to land and ditch the Linux side all together.
That's all to say, it really would be nice to have a small footprint embedded Linux distribution that wasn't mainly focused on routers like OpenWRT (not a knock on OpenWRT). Raspberry Pi and Samsung Artik 10 aren't really something I'd completely consider as embedded systems (also not a knock on either of those boards).
As far as I can tell, this just means that your only options are Android or iOS. It's not easy to get a Jolla/SailfishOS phone that will work on most Canadian or USA networks, and with this announcement it seems that Ubuntu phones won't be around for much longer. This coupled with the death of Firefox OS means that there's really not much of a choice. Certainly you can run AOSP with no Google Apps, but not having Google Play Services tends to cause more and more problems, or at the very least means your phone is less and less capable as time goes on.
I guess in general we can all celebrate that Ubuntu is moving to GNOME / Wayland and is ditching convergence, but I think the fact that there's no healthy alternative to iOS / Android is quite sad. If Canonical is exiting the mobile space to work on other things, what other alternatives do users have?
I might be a small minority, but I _like_ Unity 7. I have never used Unity 8, and I thought the Ubuntu phone was misspent effort, but wow. Now I have to figure out if there is a way to style Gnome to look like Unity 7.
I wonder what this means for Mir vs Wayland as well.
This is good news for Linux on the desktop. Not that diversity isn't good, but Unity has been stale for years while GNOME has kept progressing, though suffering from the fragmentation that Canonical caused.
Hopefully this will result in more contributions upstream, which will benefit all linux distributions. This was always the main complaint with Canonical.
This is huge and was my #1 request in the previous post about Ubuntu 17.10. GNOME on Fedora is amazing, and I have had people walk up to me and ask what OS I am running.
It is so much better for Ubuntu and Red Hat to have joint stewardship of GNOME going forward rather than splitting energy on wasted competition.
My next biggest request is flatpak vs. snappy - I can't believe the package management wars are beginning all over again in 2017. Just pick one and be done with it. RPM and DEB will never converge, but we have a narrow window of opportunity with flatpak and snappy.
Canonical just didn't have the resources to push a 3rd mobile platform. Hell, even Microsoft gave up (who did have the resources, and IMO made a mistake in giving up).
Xiaomi is releasing the Mi 6 in a few weeks with 8 GB of RAM, so it's only a matter of time until we get specs comparable to what we have in MacBooks or even desktops in the smartphone form factor (which will be incredible in itself).
Disclaimer: I am really happy using Ubuntu everyday.
Every time someone installs an image with 'Ubuntu' in the name, does the cloud provider have to pay Canonical? Or what else could be the main revenue stream 'in the cloud'?
I was not very happy when they decided to make Unity mac-like by moving the menu to the top bar. It has to be the most idiotic decision around for anyone who has multiple monitors. Even with a single monitor, it is a pain in the ass to move the mouse all the way to the top left corner for the menu to appear, and then orienting myself with the items presented.
WRT Unity, at least it's improved. Ubuntu with Unity currently has the best multi-monitor, multi-DPI user experience I've seen on Linux. I have a retina laptop with two external displays, and Ubuntu with Unity is the only thing that just works out of the box.
I'm holding out at the moment, as it's missing one feature from Cinnamon that I really like (the ability to launch and control any audio player from the sound icon in the tray), but when Fedora 26 launches I may finally have to switch over.
I hope that Canonical shifting back to GNOME will further its development under Wayland and not spend a crapload of time doing more work for Mir.
I would be willing to spend a lot of money for a decent FOSS phone, but there just isn't anything out there. Ubuntu was my only hope |:(
I'd like to state that I'm a happy Ubuntu user, and have been using Unity for the past few years and have generally found it to be a good ui for the desktop. I know many many others fall in the same bucket.
The Ubuntu phone thing was a nice idea, but obviously never going to fly, so it's nice to see Canonical putting their efforts where their actual users are.
I ended up dropping desktop Linux entirely last year, but Unity was the only thing that I thought wasn't matched by any other desktop operating system.
But it's also very sad. Canonical tried to bring open source technology to consumer devices, failed, and never really looked likely to succeed in the first place. Although they made the Linux desktop accessible to large new audiences (including myself), they too could not take the next step of making the technology a real success in the general consumer market.
Ironically, their desktop popularity gave them large popularity amongst developers and sysadmins, which now allows them to have a great business case to sell to enterprises for cloud and IoT. The same guys that boosted the popularity of Linux with a desktop for everyone are now going to be making almost all their money on enterprise licenses for headless systems. Ugh.
I will be happy to see if the best parts of Unity make it back to GNOME - GNOME just feels clunky right now. But I will continue using Ubuntu regardless.
I feel like Ubuntu's Mir pushing was one of the big things holding back Wayland from widespread support and adoption.
I'd love to dump X without feeling like I'm shooting myself in the foot sometime in the next few years.
I was using GNOME Shell on an old Eee PC. The interface is dumbed down, and you have to install a bunch of extensions to make it as functional as Unity.
What was worse is that the launcher automatically triggered the search function, and that slowed the PC to a crawl. I'm using KDE on it now and, even though it's less stable, it has a decent interface and it is surprisingly snappy.
I always thought Canonical wouldn't be around much longer, and this news seems to confirm that. As a long-time Ubuntu user, I'd miss them.
While I do understand their change of focus (after all, someone needs to pay those developers somehow, and resources are limited), it reminds me of Red Hat's decision that enterprise was where the money is, when they switched their focus away from the desktop.
Cloud and IoT are basically where enterprise is going and where GNU/Linux is just part of the underlying services, but not the main product being sold.
Now that Ubuntu is going back to Gnome, although I may not switch back to it immediately, it will at least be in my mind as an option whenever I keep upgrading Linuxmint.
3 of them are now true. Karma.
Build a terminal-only window manager with firefox built in.
Why not just build it for server and desktop because that's what Ubuntu is best at, and not chase the latest fads?
I shared Shuttleworth's view on convergence.
Yes, I realize a lot of Ubuntu users have already gotten used to it, and some may even prefer it, but I'm sure the majority of new users would feel confused and turned off by it. I would argue Ubuntu has to focus more on gaining Windows users than on keeping the hardcore Linux users.
I hope they can also align their release schedule with the Gnome team. I want to be on the latest desktop and get access to all the new features as they come out - not six months later.
Translation: "We don't really care that much about Linux on the desktop because it's not making us much $$$."
Your actions are a huge letdown to the community. You caused a lot of strife with the move away from GNOME to Unity in 2010, and now you're ripping the rug out from under even more users after feeding us crap about convergence, etc. for the past 4-5 years (?). Nobody asked you to do any of that phone or convergence stuff, but peddling it like it's the next big thing and then pulling it without ever really shipping a working product is just amateurish and makes the whole FOSS community look bad.
You and FOSS companies like you - please stop. Next time you have an idea that's going to be the next "big thing" or a "world-changing paradigm shift", ship something before you start hyping it to the tech press. Ubuntu Touch/Unity 8 is just the latest in a long line of embarrassing open-source letdowns that took enormous amounts of time and energy away from the community with absolutely nothing to show for it.
A disappointed ex-Ubuntu user
* They actually started deploying them in 2015, they're probably already hard at work on a new version!
* The TPU only operates on 8-bit integers (and 16-bit at half speed), whereas CPU/GPUs are 32-bit floating point. They point out in the discussion section that they did have an 8-bit CPU version of one of the benchmarks, and the TPU was ~3.5x faster.
* Used via TensorFlow.
* They don't really break out hardware vs hardware for each model, it seems like the TPU suffers a lot whenever there's a really large number of weights and layers that it must handle - but they don't break out the performance on each model individually, so it's hard to see whether the TPU offers an advantage over the GPU for arbitrary networks.
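Since the bullets above hinge on the TPU's 8-bit arithmetic, here is a minimal sketch of symmetric linear int8 quantization -- my own illustration of the general idea, not Google's actual scheme (the `quantize`/`dequantize` names and the single-scale design are assumptions):

```typescript
// Symmetric linear quantization of float32 values to int8, a toy sketch.
// The core idea: map [-maxAbs, +maxAbs] onto [-127, 127] with one scale
// factor, then do the heavy arithmetic in cheap 8-bit integer units.

function quantize(values: number[]): { q: Int8Array; scale: number } {
  const maxAbs = Math.max(...values.map(Math.abs), 1e-12);
  const scale = maxAbs / 127; // one float carried alongside the int8 data
  const q = new Int8Array(values.map(v => Math.round(v / scale)));
  return { q, scale };
}

function dequantize(q: Int8Array, scale: number): number[] {
  return Array.from(q, v => v * scale);
}

const weights = [0.4, -1.27, 0.02, 0.9];
const { q, scale } = quantize(weights);
const restored = dequantize(q, scale);
// Each restored weight is within scale/2 (half a quantization step) of
// the original, an error many inference workloads tolerate.
```

Real TPU quantization is more involved (per-layer scales, activation handling), but this is why an apples-to-apples comparison against 32-bit float CPU/GPU baselines is tricky.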
As an example, the ALVINN self-driving vehicle used several such arrays for its on-board processing.
I'm not absolutely certain that this is the same, but it has the "smell" of it.
I'd much rather see a general purpose CPU that uses something like an array of many hundreds or thousands of fixed-point ALUs with local high speed ram for each core on-chip. Then program it in a parallel/matrix language like Octave or as a hybrid with the actor model from Erlang/Go. Basically give the developer full control over instructions and let the compiler and hardware perform those operations on many pieces of data at once. Like SIMD or VLIW without the pedantry and limitations of those instruction sets. If the developer wants to have a thousand realtime linuxes running Python, then the hardware will only stand in the way if it can't do that, and we'll be left relying on academics to advance the state of the art. We shouldn't exclude the many millions of developers who are interested in this stuff by forcing them to use notation that doesn't build on their existing contextual experience.
I think an environment where the developer doesn't have to worry about counting cores or optimizing interconnect/state transfer, and can run arbitrary programs, is the only way that we'll move forward. Nothing should stop us from devoting half the chip to gradient descent and the other half to genetic algorithms, or simply experimenting with agents running as adversarial networks or cooperating in ant colony optimization. We should be able to start up and tear down algorithms borrowed from others to solve any problem at hand.
But not being able to have that freedom - in effect being stuck with the DSP approach taken by GPUs - is going to send us down yet another road to specialization and proprietary solutions that result in vendor lock-in. I've said this many times before and I'll continue to say it as long as we aren't seeing real general-purpose computing improving.
So they are telling us about inference hardware. I'm much more curious about training hardware.
Most of us are probably better off building a few workstations at home with high-end cards. The hardware will be more efficient for the money. But if you're considering hiring someone to manage all your machines, power-efficiency and stability become more important than the performance/upfront $ ratio.
There's also FPGAs, but they tend to be much lower quality than the chips Intel or Nvidia put out so unless you know why you'd want them you don't need them.
-This is why the unlimited Netflix / Spotify model is so good. Sure, the content owners could get more revenue from the heaviest consumers with a pay-per-click model, but only at the expense of all the sparse users, who would rather just pay a monthly fee than have to think "is this click worth it?" every time they want to consume additional content.
-I'll predict that within the next few years, the major publishers will come together and form their own subscription company with a revenue share similar to Spotify. Not sure why it's taking so long, but that day can't come soon enough for me. Why are the publishers letting AdBlock Plus and other potential startups start a business that they could own?
Here's my thought process:
-People read tons of different publications.
-Publications generally prefer subscription fees to ad revenue
-People don't want to deal with micropayments
-People don't want to pay for (or manage) multiple subscriptions
-Giving away your product for free (purposely or with weak paywalls) and asking for donations is probably not a long term sustainable strategy.
If the WaPo had 1 $25 subscriber and the WSJ had 1 $25 subscriber, the total industry revenue is $50, but each consumer only gets half the content (although much of the daily news content is roughly identical). If the WSJ and WaPo shared subscribers, the consumers would get double the content while the industry costs would stay the same. When consumers see additional value for their subscription dollars, they are more likely to sign up, increasing the number of potential customers. The industry will lose the revenue of the big spenders, who subscribe to both WaPo and WSJ... but I don't think many of those are digital subscriptions, and I think that's likely to be offset by the torrent of new customers.
Maybe even make it a whitelist service that asks at the end of the month which sites you'd like to contribute to, with a notification if you haven't blacklisted a site but keep saying you don't want to contribute.
You should be installing "uBlock origin".
There are sites that I go to 2-4 times a month (e.g. the Guardian, Wired, The Atlantic) that I'd love to support, but I'm not motivated enough to subscribe to the more expensive ones at $70/year (Guardian), $52/year (Wired, $1/week for the website with adblocker and it's separate from their print+tablet subscriptions), etc. The Atlantic at $24/year is a much more palatable option.
I sometimes feel bad about not being able to support them with ad revenues, but these days I regard even the better ad networks as unmarked minefields.
Would love to hear from others about their experience with it.
I really recommend checking out uBlock Origin. It's a superior technical solution while not doing any shady deals or anything of that sort.
Having a fixed identity is not desirable from multiple perspectives. It allows others to create a personal bubble around you and feed you with controlled information.
When users cannot be uniquely identified, the Internet is like radio, where you can be sure that every listener receives the same information you do. This means users cannot be individually deceived.
I can only hope they manage to get people to pay for content, finally giving us an alternative to the terrible ad-based business model.
But it seems to me that BAT is the future.
Something, something, slippery slope?
Not condoning what's happening in Russia, just pointing out how prevalent similar practices are around the world.
> A Russian court has ... sentenced the culprit to compulsory psychiatric care.
Putin has the ultimate safe space, no one is allowed to say or do anything bad about him. And NK's media reveres Kim Il-Sung as a god.
If you're writing a game which, like this one, does not run at a fixed guaranteed framerate, you should vary movement speed based on the time delta between frames, i.e. instead of `x += 2.5` on every frame, do `x += (2.5 * 60) * seconds_since_last_frame`.
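That advice can be sketched in a few lines of TypeScript (the `SPEED` constant and `updateX` helper are hypothetical names for illustration, not Phaser API):

```typescript
// Frame-rate-independent movement: a minimal sketch.
// 2.5 units/frame at a nominal 60 fps becomes 2.5 * 60 = 150 units/second.
const SPEED = 2.5 * 60;

function updateX(x: number, secondsSinceLastFrame: number): number {
  // Scaling by the measured frame delta means a slow frame moves the
  // object further, so total distance depends on wall-clock time,
  // not on how many frames the machine happened to render.
  return x + SPEED * secondsSinceLastFrame;
}

// One 60 fps frame advances x by ~2.5 units; one 30 fps frame by ~5,
// so a player on a slower machine still moves at the same real speed.
const at60 = updateX(0, 1 / 60); // ~2.5
const at30 = updateX(0, 1 / 30); // ~5
```

Most engines hand you this delta directly (Phaser's update callback receives an elapsed-time value), so the main discipline is remembering to multiply every per-frame change by it.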
The TypeScript definitions are pretty nice (at least for the last official 2.x release), so if you're interested in learning TypeScript it's a great way to start.
Life has gotten in the way so I've slowed down, but I kept track of the Phaser tutorials I followed at https://github.com/JamesSkemp/PhaserTutorials (and converted the later tutorials to TypeScript while I was following them).
I understand engines like Unity and Unreal can now compile to JS, but most of the games I play are native.
I'm from south-west Australia (where these dolphins were observed) and grew up swimming along the coast, scuba and snorkelling to much delight.
Once, in a little sand hole near Yanchep, I was floating around on a particularly hot day, when I thought I saw a crayfish just lolling around on the ocean floor. Diving deeper to get a closer look, I realised it was just the head of the crayfish, long-since hollowed out, albeit with a bit of flesh still onboard.
I took an even closer look, and what I thought was a bit of weed attached to it was actually a long, slender tentacle, which I followed back to a minuscule hole in the rocky sand, and therein I spotted the rest of the octopus.
Delighted, I stuck around for a while to watch.
The octopus was fishing! It held out the cray corpse just a little, out in the currents, waiting patiently for the stupid little white fish to come and try to nail a bite!
When a particularly dumb little fish got its fangs in, BAM! Out came another tentacle, fast as light, and snatched that dumb fish into the hole. All the while, the original bait tentacle continued to just loll about, swinging and swaying, getting those dumb fish even closer and closer, ever more tantalising.
I stayed and watched it all day - it apparently had a nearly insatiable hunger, or maybe it was just really enjoying itself. To my dying day, I'll never forget that clever little cephalopod...
Now that it's become clear that the octopus is an intelligent creature capable of solving problems and making plans, I'm going to eat less sophisticated mollusks and leave octopus to the dolphins.
Maybe it's just that they like the taste? It's not like we don't go out of our own way to find certain delicacies just because we're in the mood for them.
I know anthropomorphizing animals is a no-no these days, but trying to explain everything based on "nutritional value" seems a bit narrow minded...
Super dangerous and only possible when working together.
Perhaps one day, thousands of years from now, technologically advanced dolphins will watch us doing the same thing, and they will have a similar article about us!
I can't imagine that a dolphin can throw an octopus leg at more than 15-20 mph, and at that kind of speed I wouldn't imagine damage to a soft bodied creature.
I would imagine a better technique to be to leave the octopus legs for 10 minutes for lack of bloodflow to kill it. Wonder why they don't do that? Too hard to defend a bleeding morsel of food from other predators?
Best line from the article :D
How I wish he had the human society uplift the Octopus instead of the expected mammals.
The article reveals that dolphins can be killed trying to eat octopus in a more conventional fashion, but I don't see any "risk" involved with their throwing method.
Out of a total budget of $445m:
~66% (292m) is distributed via direct grants to local public television and radio stations.
~17% (74m) goes to television programming grants.
~7% (30m) goes to radio programming.
~11% (59m) goes to system support and administration.
(I had to look him up, might be useful for other people as well.)
Which makes me think: it would be interesting to have a diff tool for videos, but it tries to approximate for content.
Customers might include all the students of Udacity, Coursera, EdX and Startupschool.
I really appreciate the clarity of this post. The author is building up the groundwork without skipping steps that may be obvious to many readers. I of course knew the purpose of a hash before reading the article, but some people don't - and that sentence clearly let those users know why the hash matters without making it less readable for knowledgeable readers.
Writing clarity matters.
It's not really as useful if you are serving your static assets from the same place as the HTML (and you always use HTTPS) but if you load your js/css on another server SRI can still provide some protection.
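For anyone curious what the integrity value actually is: it's a base64-encoded cryptographic digest of the resource bytes. A sketch of computing one with Node's built-in crypto module (the `sriHash` helper is my own name; the SRI spec also allows sha256 and sha512):

```typescript
import { createHash } from "crypto";

// Compute a Subresource Integrity value for a script body, for use as:
//   <script src="..." integrity="sha384-..." crossorigin="anonymous">
// The browser re-hashes the fetched bytes and refuses to execute the
// script if the digest doesn't match, so a compromised CDN can't swap
// in modified code.
function sriHash(body: string | Buffer): string {
  const digest = createHash("sha384").update(body).digest("base64");
  return `sha384-${digest}`;
}

const tag = sriHash('console.log("hello");');
// "sha384-" followed by 64 base64 characters (a 48-byte SHA-384 digest);
// any change to the body, even one character, produces a different value.
```

This is why it only protects third-party loads: if the attacker controls the HTML itself, they can simply change or remove the attribute.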
Say I previously loaded a page that included jQuery from CDNJS, and now I'm in China and another site tries to load jQuery from Google's CDN.
Currently that request would get blocked by the great firewall. But since the browser should know that this file matches one it has seen (and cached) before it should be able to just serve the cached file.
This could also save a network request even if I'm linking to a self-hosted file on my own servers if I include the hash.
Your Stripe js, scary ad-network js, front-end analytics companies. SRI is really neat and helps protect you from these many 3rd parties being pwned.
If your HTML goes through a CDN (say, you use the full Cloudflare package), the CDN can of course just remove or modify these integrity attributes, or add new scripts altogether.
This doesn't make sense to me. Why shouldn't I be able to perform integrity checking on resources from non-CORS domains?
As I've told anyone who'll listen, I was shocked -- shocked! -- to find that, once they'd gotten past the preliminaries (first a virtual robot game like the OP, then standalone bots with onboard UIs resembling repurposed TI-80s), the "real" programming takes place on a PC using a barebones IDE and a hot new language called C. In case you haven't heard of it (it's so hard to keep up these days), C is a language invented by Dennis Ritchie to help six-year-olds learn computers through interactive play.
And how do the kids spend their forty-minutes-a-week? They type some lines of C, then compile it to target the bot, which appears (thanks to a virtual COM port driver) as COM3 or COM4 or COM something, as long as the bot is plugged in over USB. Then they go to the bot, unplug it, and, after a sequence of menu selections on its wonky touch UI, which usually requires the assistance of a pencil eraser, they may "run" the program, at which point, all going well, the motors actuate.
At this point my listener is gone, so I'm just walking down the street ranting, "There's so much wrong with this!" I go on to complain about how it's "insane" that children should be subjected to a development experience that we know is abysmally inadequate for the grown-ups who practice it for a living! (cf Bret Victor)
This is the GT program: they had to score 97th percentile on a standardized math or verbal test just to qualify, and only six of them were invited to the botball team. They can focus as well as anyone their age. But consider what we're asking of them! A typical interchange devolves as follows:
0. Great, ready to see something happen? Yeah, me too!
1. Wait, so one more thing: because of reasons, you need to type a semicolon at the end of every statement...
2. Oh, a semicolon is a... it's like a colon except it has...
3. A colon is like... anyway it doesn't matter, you need a semicolon...
4. It's right there on the keyboard next to L... just type it
5. No, don't hold shift, that was only for the parentheses...
6. The parentheses were for the... arguments...
7. Okay, you're not in the right place, you need to move the cursor...
8. The cursor... do you see the blinking line on the screen?... look, right there, where I'm pointing...
9. Yeah, you need to click exactly right there... you know what just use the arrow keys...
10. The arrow keys are over on the right at the bottom... use the up arrow, the up arrow... that one...
11. Keep pressing it, keep pressing it... you have to go through the whitespace...
12. Whitespace is what we call the part of the program that's invisible, but it's there...
13. Oh no, out of time! See you next week, try not to forget about semicolons in the meantime! There are servos on this thing just awaiting your command!
Look, Dennis, I have nothing against your footgun -- I learned it after BASIC when I was twelve like all the other kids, and I turned out just fine. But, as Alan Kay said in a Q&A, "I hate to break it to you, but C is not a high-level language." [citation needed; I think it was at a Qualcomm event.]
And don't even get me started on the feedback loop, officer.
Anyway, thanks for posting this! One of the other parents asked me what her child could do at home to practice, and I'm looking for resources just like this!
Should it be "move the robot to the chest" or something instead of "the metal block?"
It was something else, at least back then. (I do remember thinking it would be really hard to play without someone around who understood at least basic Japanese.)
The programming screen is shown briefly here:
I found this video from this web page, where there's more text and images: (Chrome with Google Translate works okayish.)
How much time in the class did you spend working on the competitions vs just doing rote exercises?
Time has shown that Red Hat really knows what they're doing, on many different levels, and they're the most successful free software business in history.
"Red Hat also commits to keeping our commercial products 100% pure open source. Even when we acquire a proprietary software company, we commit to releasing all of its code as open source."
I had no idea this was the case, pretty cool!
But I've read that text, and have been utterly converted to RH. RH is the ultimate good, go with them, and accept their demands.
A great company, culture and a great place to work.
I get it though. Anyone know how Centos under Red Hat is nowadays?
2015/46 - visual and tactile bodysuits enable advancement in personal Virtual Realities, which begin to take the market share from TV, radio, films, and other media.
2017/48 - first universal operational machine 'Harvey' constructed by a team led by Peter Shor at Bell Labs, with funding from IBM, Lycos, RedHat and Pepsi.
That bit from the timeline was probably written around 1999-2001 or so. Kind of amazing that out of the tech companies mentioned, RedHat is the one that's still doing pretty well.
Are people regularly using ZFS-on-Linux with Red Hat? Or is there an officially sanctioned CoW choice from Red Hat?
Or is that not on the radar at all right now?
Love Redhat's OSS work.
Please build officially Red Hat backed Fedora tablets. This would be a win-win situation for both sides.
There are a lot of talented people working at this company, without a doubt. But these are the same people that brought you systemd and SATELLITE. Has anyone here ever USED Satellite (6.x)?!
Get this off my news feed.
I do have to say when I and my ex broke it off, reading that first conversation logged in my Facebook chat between the two of us was a total bitch to swallow. Everything was there. Every single word. Nothing's faded into distant memory. There we were 2 years ago happy that we've met each other. Here we are now - complete strangers to each other. It is definitely a weird feeling.
Gabriel Garcia Marquez, eat your heart out.
She mentions in the article that "I came to the attention of a media storm after being struck by a tragedy. My life imploded, and between grieving and dealing with media controversy, my days became a sickening tragicomedy I couldn't turn off."
Could it have been this?
> Norton dated Aaron Swartz for three years. Articles in The Atlantic and in New York Magazine indicate that she was pressured by prosecutors to offer information or testimony that could be used against Swartz, but that she denied having information that supported prosecutors' claims of criminal intentions on Swartz's part. Prosecutors nevertheless attempted to use a public blog post on Swartz's blog that Norton mentioned, which may or may not have been co-authored by Swartz, as proof of a criminal intent.
I have some long-term online friends. Mostly totally anonymous. But none romantic. I don't even for sure know gender for some of them. It doesn't really matter.
OTR is useful if you go to great lengths to exchange public keys, and as soon as the key changes you go through all of it again. (I don't really count shared secrets as a secure means of authenticating your key, since if you have the shared secret, the key can be substituted and thus is irrelevant)
That's probably fine if you're just chatting aimlessly and don't need to rely on secure communication regularly. But it's a pain in the ass if you wanted to rely on it for remote long-term secure communication. "Privacy" is about all it's useful for (assuming more attacks aren't found in the protocol).
(Side note: to defeat all this complicated encryption and expose identities, just become a member of the hacker community. They're quite gossipy)
Little known fact: a single miner has close to 65% or more of the mining power on Namecoin. Reported in this USENIX ATC '16 paper: https://www.usenix.org/node/196209. For this reason, some other projects have stopped using Namecoin.
I'm curious what the ZeroNet developers think about this issue and how their experience with Namecoin has been so far.
*I am referring to concentrated power of the big players here, country-wide firewalls, and bureaucracy towards how/what we use.
* 2 years out of date gevent-websocket
* Year old Python-RSA, which included some worrying security bugs in that time. (Vulnerable to side-channel attacks on decryption and signing.)
* PyElliptic is both out of date, and actually an unmaintained library. But it's okay, it's just the OpenSSL library!
* 2 years out of date Pybitcointools, with just a few bug fixes around confirmation things are actually signed correctly.
* A year out of date pyasn1, which is the type library. Not as big a deal, but covers some constraint verification bugs. 
* opensslVerify is actually up to date! That's new! And exciting!
* CoffeeScript is a few versions out of date. 1.10 vs the current 1.12, which includes moving away from methods deprecated in NodeJS, problems with managing paths under Windows and compiler enhancements. Not as big a deal, but something that shouldn't be happening.
Then of course, we have the open issues that should be high on the security scope, but don't get a lot of attention.
* Disable insecure SSL cryptos 
* Signing fail if Thumbs.db exist 
* ZeroNet fails to notice broken Tor hidden services connection 
* ZeroNet returns 500 server error when received truncated referrer  (XSS issues)
* port TorManager.py to python-stem  i.e. Stop using out of date, unsupported libraries.
I gave up investigating at this point. Doubtless there's more to find.
As long as:
a) The author/s continue to use outdated, unsupported libraries by directly copying them into the git repository, rather than using any sort of package management.
b) The author/s continue to simply pass security problems on to the end user
... ZeroNet is unfit for use.
As simple as that.
People have tried to help. I tried to help before the project got as expansive as it is.
But then, and now, there is little or no interest in actually fixing the problems.
ZeroNet is an interesting idea, implemented poorly.
Given the long history of vulnerabilities in browsers, trusting JS from a well-known website might be OK, but trusting JS from ZeroNet is unreasonable.
If ZeroNet could run with JS code generated only by the local daemon, or without JS at all, it would be brilliant.
How does this track with the Tor Project's advice to avoid using BitTorrent over Tor? I can imagine that a savvy project is developed with awareness of what the problems are and works around them, but I don't see it addressed.
I haven't looked deep into any of these projects, but I do think they are neat and hope at least one of them gains a lot of traction.
I cannot help but feel disappointed and unamused.
Huh? What do they mean?
Hashrates can't be compared directly, because different hashing algorithms have different costs for producing a hash.
Even more crazy, the James-Stein estimator, which does this, actually uses data about the football player and the soccer player to make predictions about the baseball player (and vice versa). This is deeply unintuitive to most people, since the players aren't related to each other at all. The phenomenon only holds with at least three players; it doesn't work for two.
(More generally, Stein's Paradox is the fact that if you have p >= 3 independent Gaussians with a known variance, you can do better in estimating their p-dimensional mean than just using their sample means).
I've spent a bunch of time trying to understand why this actually works; to be honest, I still don't deeply understand it. But the consensus is that the same shrinkage phenomenon is what drives the improved performance of a variety of high-dimensional estimators (lasso or ridge regression, e.g.), making the paradox very influential.
 https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator https://www.naftaliharris.com/blog/steinviz/
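To see the shrinkage effect for yourself, here's a quick simulation sketch (the "skill levels" are made-up numbers) comparing the positive-part James-Stein estimator against the plain sample means for p = 3 players:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3                                 # three players: baseball, football, soccer
theta = np.array([0.3, 0.6, 0.4])    # true (unknown) means, chosen arbitrarily
trials = 20000

mle_err = js_err = 0.0
for _ in range(trials):
    x = rng.normal(theta, 1.0)                        # one noisy observation per player
    shrink = max(0.0, 1 - (p - 2) / np.dot(x, x))     # positive-part James-Stein factor
    js = shrink * x                                   # shrink every estimate toward zero
    mle_err += np.sum((x - theta) ** 2)               # risk of the plain sample means
    js_err += np.sum((js - theta) ** 2)               # risk of the shrunken estimates

print(mle_err / trials, js_err / trials)  # JS total risk comes out lower
```

Note how each player's observation enters the shared `np.dot(x, x)` term, which is exactly how the "unrelated" players end up influencing one another's estimates.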
"Mr. Jones has 2 children. What is the probability he has a girl if he has a boy born on Tuesday?" Somehow knowing the day of the week the boy was born changes the result. It's completely bizarre.
Suppose you're on a game show, and you're given the choice of three doors:
Behind one door is a car; behind the others, goats.
You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat.
He then says to you, "Do you want to pick door No. 2?"
Is it to your advantage to switch your choice?
 - https://en.wikipedia.org/wiki/Monty_Hall_problem
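The Monty Hall answer is easy to check by simulation; a quick sketch (door numbering is arbitrary, and when the contestant's pick is the car the host simply opens the lowest-numbered goat door):

```python
import random

random.seed(42)
trials = 100_000
stay = switch = 0
for _ in range(trials):
    car = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a goat door that is neither the pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    # The one remaining closed door is the "switch" choice.
    other = next(d for d in range(3) if d != pick and d != opened)
    stay += (pick == car)
    switch += (other == car)

print(stay / trials, switch / trials)  # roughly 1/3 vs 2/3
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time, so yes, it is to your advantage to switch.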
What does it mean that tossing a fair coin has a 50 % probability of showing heads?
If you think you know the answer, you are probably wrong.
EDIT: Instead of just voting this down, try to give an answer. If you think it is easy, you have not thought about it carefully enough.
It was really disappointing. Maybe it was just that specific hackathon? I knew "the pitch" was a big part of winning, but didn't expect it to be 99%. A few teams built some actual cool and functioning things in 24 hours. Ours was the only app with a real demo you could visit/download, rather than it only working locally on the dev's machine. But the only teams that reached the finals were the ones with great speakers and a good idea, even if it wasn't functional at all.
The guy I know ended up winning the first prize at that event. His team had an amazing idea and a great pitch, but their "app" was a powerpoint presentation, and a couple static HTML pages that faked a dynamic user flow. It worked as long as you clicked the correct image map area, and entered values into text boxes that matched what was hard-coded into the next page.
I had fun with my team and made new friends, but it was really discouraging to lose a "hackathon" mostly because the other team had better speakers, not for technical reasons. I guess I prefer game jams: no qualms about marketability or a pitch with empty promises. Maybe it's because game jams usually don't have financial prizes aside from the exposure?
Where is the engineering rigor? Where is the quality assurance? Are all the edge cases handled? Security review / penetration testing? What is the performance when you serve 10K users simultaneously? Can one actually build a business out of the "product" of a hackathon, or does it basically need to be thrown away and re-architected properly? Is it even a goal to build something viable, or is it the whole event simply a way to spend corporate marketing money on buzz?
> "It was easy to see hackathons with 50k plus prizes 2 years ago. Now days some of them only offer Alexa as prizes. The college kids will do it regardless since they don't have other stuff to do, but once you get a job that is a bit too much work for very little gain."
This reminds me of a few years back, when Greylock hosted a hackathon with an unintentionally ironic prize of a "Hacker Cash-omatic": http://valleywag.gawker.com/hackathon-accidentally-picks-per...
The same hackathon now offers $10k for first place, round trip airfare for second place...and Myo Armbands for 3rd.
I also know that there's a group of people who attend as many hackathons as possible in the bay area for no other reason than living is expensive, and there is generally free food at the hackathons.
Honestly, I'd be a part of one of these again in a heartbeat -- the hours were crazy, but, it was a great way to get your developers, your API consumers, and everyone else in the same room, to validate if it will burst/scale up, and you can solve a lot of really interesting challenges that weekend.
But now most hackathons are just implementing features with technical debt, which is the kind of engineering nobody likes.
Then, cheating in hackathons is incredibly hard to detect. Even if you force everyone to use version control and review what they did over time, there can always be the cooking show trick where suddenly you pull a finished important part from nowhere.
That's why I think hackathons should be engineering events, organized by engineers not the regular company hierarchy, and rewards should be given on engineering value not how much a feature can potentially sell.
(Animation: From Cartoons to the User Interface)
It was fascinating. The Disney Principles of animation went beyond 'looks good', and existed to solve real problems like helping the audience follow the action.
They can be useful for hiding transitions for things that are slow. However instead of engineering an animation stack, make the backend faster....
cough linkedin, salesforce, workday cough
Scrolling is useful. Every other kind of animation is painful to serious users.
If anything, UI changes should blink for a second to show that they happened. This (and every other notification+delay) is a boon to novice users who don't yet know 'where to look' to get feedback about their change.
All delays are annoying to experts who just want to get to the next screen and don't need a supercomputer in their pocket to send emails & read proquest.
- Horlicks is a form of gruel, but they don't use the term in their marketing because of the negative connotations from Dickens.
- Horlicks was supposedly invented to make milk easier to digest for the lactose intolerant. I have no idea if there is a sound scientific basis for that, but the fact it's no longer marketed that way suggests not.
Also, the uses mentioned in TFA both make sense. Horlicks is just milk with added malt, both of which are full of calories (good in the morning) and caffeine free (good in the evening). Milk also contains small amounts of melatonin and tryptophan, both of which promote sleep, but the amounts are so small the effect is negligible.
 There seems to be some evidence that a high carbohydrate breakfast is not a good thing at all, so don't take this as a recommendation.
As a child I read the James Herriot memoirs (All Things Bright and Beautiful, etc.), in which he waxes rhapsodic about the restorative powers and deliciousness of Bovril.
I had no idea what it was, so I was free to invent my own idea of what it might taste like, or be, or consist of.
When I discovered he was so happy about diluted beef stock, I was more than a little let down.
> "In fact, many of the areas that will generate the most growth in future are currently unfamiliar in the West, according to management consultancy McKinsey."
Given that more and more of the co's in YC's recent batches are exclusively India focused, I am definitely on board with the next big unicorn being born in India and China right now.
I always thought it was odd that two such similar drinks were marketed in such contrary ways: Horlicks to make you sleep and Ovaltine to give you energy. Perhaps that was the point at which I realised marketeers weren't always scientific in their advice :(
Seems to me they could do with more marketing, any marketing.
The third way is to recognize that it is essentially milk, and milk has been consumed both before bed and upon waking millennia before modern marketing.
2. Grid extents use the "one more than end" convention instead of "length", which is sorta confusing. But then they call it "end", which is even more so.
3. grid-area's four arguments are, in order (using normal Cartesian conventions to show how insane this is): y0 / x0 / y1 / x1. Has any API anywhere ever tried to specify a rectangle like this?
This made me uncomfortable though (about CSS grid, not about the game):
grid-area: row start / column start / row end / column end;
So you have to put the rows (Y-axis coordinates) first and the columns (X-axis coordinates) second, i.e. the opposite of how it's done in every other situation, e.g. draw_rect(start_x, start_y, end_x, end_y).
(1, 1, 3, 4) in every other language would draw a box 2 wide and 3 high, but in css grid it selects an area 3 wide and 2 high.
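To make the row-first ordering concrete, here's a tiny Python sketch (the helper name is mine) that converts grid-area's row-first numbers into the (x0, y0, x1, y1) rectangle most drawing APIs expect:

```python
def grid_area_to_rect(row_start, col_start, row_end, col_end):
    """CSS grid-area lists rows first; most drawing APIs expect (x0, y0, x1, y1)."""
    return (col_start, row_start, col_end, row_end)

# grid-area: 1 / 1 / 3 / 4 -> columns 1..4, rows 1..3
x0, y0, x1, y1 = grid_area_to_rect(1, 1, 3, 4)
print(x1 - x0, "columns wide,", y1 - y0, "rows tall")  # 3 columns wide, 2 rows tall
```

So the same four numbers that would draw a 2-wide, 3-tall box in a conventional API select a 3-wide, 2-tall area in CSS grid.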
Also the fact it uses 'row' and 'column' to describe the gridlines rather than the actual rows and columns irked me.
I'm sure I'll get over it!
Is Chrome 56 so outdated that this grid box doesn't work with it?
Or, perhaps, does the game only check if I'm running the "latest" version, regardless of which browsers do or do not actually work with Grid Garden?
Edit: Oh, wow, 56 is that outdated. Talk about cutting edge technology?
But (and nothing against the author of the game)...
I'm going to jump on the bandwagon here of others wondering just what the person or committee who thought up this API was smoking when they came up with it.
At first, it made kinda sense. Nothing too troubling.
But the deeper it went, the less it made sense. I don't have a problem with 1 vs 0 indexing (because I started coding in old-school BASIC back in the dinosaur days of microcomputing - so that doesn't bother me much).
It's just that the rest of the API seems arbitrary, or random, or maybe ad-hoc. Like there were 10 developers working on the task of implementing this, but with no overall design document to guide them on how the thing worked.
I'm really not sure why there are two (or three? or four?) different ways to express the same idea of a "span" of row or column cells, based on left or right indexing, or a span argument, or...???
Seriously - the whole thing feels so arbitrary, so inconsistent. This API has to be among the worst we have seen in the CSS world (not sure - I am not a CSS expert by any means). I can easily see this API leading to mistakes in the future by developers and designers.
We'll also probably see a bazillion different shims, libraries, pre-compilers, template systems, whatever - all working on the same goal of trying to fix or normalize it in some manner to make it consistent. Unfortunately, all of these will be at odds with one another.
I'm sure JQuery will have something to fix it (if not already). Bootstrap too.
The dumb thing is that had this been designed in a more sane fashion, such hacks wouldn't be needed.
Anybody have an explanation for this surprising behavior?
I would never use CSS grid to do what this game is asking me to, so even though it helps me learn the syntax and properties, it's not helping me learn how it's going to be applied to an actual website.
edit: scratch that, I was confusing CSS grid with flexbox; my browser does not support CSS grid yet.
Ever heard of this thing called "contrast"? Could use some.
One more excuse not to use HTML tables in our toolbox
Rule #1 of GUI: every geometry manager will reinvent Tcl/Tk poorly while saying it is crap.
Oh well, maybe in 5 years time I can make use of this.
Ha, I found it. Thanks Internet Archive.
Not sure how it was accomplished, but it's another very nice touch to an impressive display.
Would love to hear the story behind its development. That kinda CSS would be pretty tedious to write.
Another example of how the general trend towards containerization vs. automated installation is excellent for reliable deployment. For server software, I was in the Ansible camp for a long, long time until I tried my first Docker-based deploy and realized that a whole swath of uncertainty about "documentation notwithstanding, can I trust this to deploy given the current state of the deploy target" just disappeared. For robotics this is even more important, because unless you're building a drone swarm, robots aren't generally in a redundant self-adapting cluster :)
Regarding why Ubuntu Core is a better fit than Docker for this, http://askubuntu.com/questions/808524/whats-the-main-differe... seems to shed some light. You want your "container" to have full access to the (robotic) system but still benefit from immutable dependencies.
I found this a more convincing approach: https://en.wikipedia.org/wiki/G%C3%B6del%27s_ontological_pro...
And whenever their examples all spring from a consistent worldview (e.g. they all debunk liberal ideas or conservative ideas or religious ideas or anti-religious ideas), it raises my eyebrows, because it makes me suspect that they're more interested in defending their worldview than they are in teaching logic.
But when someone presents example arguments that are inconsistent with one another, perhaps debunking an anti-religious argument first, then debunking a religious argument, then I know that they're primarily concerned with teaching, rather than indoctrinating.
I won't say which camp I think Prof. Hjek falls into with this Aeon piece, but let's just consider this suspicion of consistent-worldview examples another tool in the philosopher's toolkit of critical thinking.
Nick Beckstead once told me informally that new philosophical ideas can usually be classified as one of these: arguments (in the sense of formal logic, with premises and conclusions), examples (e.g., intuition pumps), and distinctions (e.g., normative vs. positive statements).
Short, with a lot of punch: Philosophical Devices: Proofs, Probabilities, Possibilities, and Sets.
A much more detailed and introductory book: The philosopher's toolkit : a compendium of philosophical concepts and methods.
Only if you define "better" in those terms. I'm sure that Romeo and Juliet would have preferred a world in which they remained alive as lovers -- but would that be a better story?
These tips are generally about logic. Logic is necessary but not sufficient to derive truth from the world. Philosophy is the search for knowledge, which is much broader and deeper than logic.
Meaning, purpose, ethics, knowledge, and truth (to name a few) can't be picked apart or pieced together with proofs and syllogisms.
Is being caught with a gram of pot a good reason to not have a job? Because in states where it is (or even was, but is not now) illegal, it's often a felony. I have friends who live in legal states with MJ possession charges from when it wasn't legal. Still haunting them... seems like bullshit all around...
On top of that, here's a fun little story. There is a guy, born a month after me to the day, in the next closest hospital to where I was born, with the exact same first, middle and last name as me. In 2003, he got in a bar fight that spilled out into a parking lot; during the fight his pants were torn (not taken off) in a way that exposed his genitals. He was charged with a sexual crime and is now a sex offender.
You wanna know how often that comes up on low-level background checks for me? You know how much it sucks having to preempt a BGC with a future employer with "you might see a thing that says I'm a sex offender, here's a nice PDF I've made explaining the issue..."?
The thing about this that makes it suck so hard, is not only are background checks bleeding other people's information into mine... but the guy is a sex offender for no reason at all... I feel bad for the dude.
That's why this system sucks, regardless...
Given all of the negative press about Uber lately, you'd think they'd give some more thought before publicly defending the right of sex offenders, people with suspended licenses, and people with multiple serious driving offenses to work as taxi drivers.
A previously suspended license flunks the background check? I've had mine suspended for failing to pay a parking ticket that was never mailed to me. Kind of a lame reason to never be able to drive for Lyft or Uber.
I learned this one the hard way a few months ago. We ran into a flexbox bug in one browser which we worked around by adding some-rule: 0.0000001px instead of 0px. However, our minifier collapsed that using scientific notation, which triggered a rendering issue in a different browser due to the out-of-spec CSS. The whole adventure left me feeling like I'd travelled back in time.
Wow. #1 on HN. Wow.
I'd usually hang around a bit more, but I'm really tired. I posted this past my midnight. 00:51 now, and I'm fading fast.
Thanks for all the love, everyone. I'll come over tomorrow (12 hours from now, or so) to answer any questions or to pick up any corrections.
There used to be a bug with flex-wrap: wrap; where an element would wrap to the next line even though it should have fit. You could fix it by using width: 24.999999% instead of width: 25%; it still rendered as 25% of the screen, but it stopped the element from wrapping to the next line. So watch out for this.
Unfortunately, every time I read something about minifiers I get the feeling that people are optimizing the wrong problem.
If you gzip data over the wire, it's already compressed, so minifying your stuff will only help a little.
A more interesting problem to solve, I think, is that of optimising CSS rules for browser rendering.
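To put rough numbers on the gzip point above, here's a quick sketch (the stylesheet and its hand-"minified" form are made up) comparing how much minification saves before and after gzip:

```python
import gzip

# A toy stylesheet, repeated so there's enough text for gzip to work with.
rule = "body {\n    margin: 0;\n    padding: 0;\n    color: #333333;\n}\n"
css = rule * 200
minified = "body{margin:0;padding:0;color:#333}" * 200

gz_css = gzip.compress(css.encode())
gz_min = gzip.compress(minified.encode())

# Minification saves a lot on raw bytes, but far less once gzip has run:
print(len(css) - len(minified), "raw bytes saved by minification")
print(len(gz_css) - len(gz_min), "gzipped bytes saved by minification")
```

Whitespace and long repeated names are exactly the redundancy gzip removes anyway, so the gzipped gap between the two versions is a small fraction of the raw gap.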
It's very interesting, however, that no one minifier is a consistent winner in these test cases, and that running CSS through multiple minifiers is actually, potentially, not all that crazy. (The very debatable real value in doing that notwithstanding.)
What I'm curious about is if this has any kind of negative impact on performance, bandwidth, etc. Because the CSS is loaded on the component level, and because Webpack 2 does tree shaking, the page will be guaranteed to only contain CSS for the components that are on the page. And if I'd 'lazy-load' parts of the app, I'd get that benefit for my CSS as well with no extra effort.
On the other hand, any benefits of having a compiled (and hopefully cached) bundle.css are offset by the need for an extra request for the css file, as well as the very likely situation that there'll be a bunch of unused css in that bundle.
Am I missing some drawback to the above-mentioned approach?
Also, I didn't know one could use counters already. Browser support is great; I thought it was still under approval.
Amazing stuff, thanks
Remy: I'd suggest posting a CV and linking to it from this post. I looked and couldn't find one anywhere on your site; you'll get a lot more qualified interest if people can find out more about you than just a few blog posts.