I'm so amazed that this is a thing.
I'd just drop the ping to google entirely. If wget/curl/fetch fail, so be it. No need to introduce additional slowness for the small chance internet isn't working.
Until then I had used bash exclusively for years. Now I will absolutely never go back.
$ llbin | nlines 738
EDIT: ...oh, it sends your data to a public service, which is not what I would prefer. But the cleverness of the service (rendering the pixels via ASCII) seems like it would make a great local utility.
Another EDIT: the encoding lib is LGPL, support for ASCII rendering contributed by Ralf Ertzinger -- https://github.com/fukuchi/libqrencode
And for Pandora, there is Pianobar, a console based pandora player: https://github.com/PromyLOPh/pianobar
(Note: it's mostly an educational tool; for example, you could enable an option to make climate print the actual command before executing so that you can learn your way around using the shell effectively.)
I've been keeping my bash scripts in GitHub gists, but I like this organization as well.
But there are some compute-related tools here that I'd recommend everyone try at least once:
1) cheat. Total lifesaver. I used to backup my histories but I no longer do so since most commands I use have an example there.
2) qrify looks good; not sure how often I'll use it.
About crypt: I'd suggest installing openssl and using the tools in there. Crypto is hard to get right. Not to dissuade anyone from trying to build their own, but as an end user, always use something widely used.
https://medium.com/@tjholowaychuk/blueprints-for-up-1-5f8197...
https://github.com/apex/up
So far I'm seeing this as useful for side projects at most. The layer is too opaque for me to troubleshoot serious issues, and it's too new to know there won't be any. More layers add more complexity in the early stages; over time, though, this could get quite compelling.
But it is a really cool thing.
True. However, in the case of Google Cloud Platform, the Protocol Buffer definitions for their services have been open-sourced here: https://github.com/googleapis/googleapis
There are two immediate benefits/possibilities: you can generate your own gRPC client libraries for GCP services, and you can re-implement GCP services using their open-sourced interface definitions. One example would be the Google Cloud Functions Emulator, which implements the service defined in the Cloud Functions service's Protocol Buffer. You could deploy that Emulator somewhere as a sort-of "dev" version of the production Cloud Functions service, and the gcloud SDK could talk to it.
https://github.com/GoogleCloudPlatform/cloud-functions-emula...
https://github.com/googleapis/googleapis/blob/master/google/...
The intersection of Crouch St. and SE P St. in Bentonville, AR. Crouch and pee, everyone. My grandma lived on the corner there thirty years ago.
It does not exist, or maybe it exists, but only in superposition: https://www.crossing.us/intersections/Heisenberg/Schr%C3%B6d...
Sadly, non computable: https://www.crossing.us/intersections/Turing/Church
Computable, but only by partial Integration: https://www.crossing.us/intersections/Hamilton/Lagrange
For friends of modern art: https://www.crossing.us/intersections/Pablo/Salvador
So thanks for the easy points!
(It's actually the intersection of "Don" and "Moscow". But judging from the other street names here, "Don" was probably intended to be the river in Russia.)
Who would have thought that only one would exist in...Memphis.
Nearby Antonio and Oso Parkway was no problem, though.
Then again, it doesn't have a problem with the San in "San Antonio Avenue"
Sapir/Whorf - 0
Deming/Kruger - 0
3) Have you considered using something like this as a boilerplate utility for actual lawyers to improve their throughput for creating privacy policies?
4) Have you considered a similar utility for site terms of service? Is the problem different in magnitude?
5) Are privacy policies legally onerous if you do them in multiple languages? Seems like translation arbitrage would be disadvantageous.
Maybe time to blow the dust off my PHP binary... Great work!
+ 37 Signals/Basecamp
+ Zip Recruiter
+ Outcome Health
Job Platform ZipRecruiter Takes Its First Outside Funding, $63M Led By IVP: https://techcrunch.com/2014/08/26/job-platform-ziprecruiter-...
Disclaimer: I work for ZipRecruiter.
I remember buying a TV stand from TVStands.com about a decade ago.
If this isn't readable on a recent Android phone with a decent net connection then who are these articles for anyway?
PowerPC, and really, most non-x86 architectures, do this one way or another.
Which brings me to padding. I wonder what percentage of memory of the average 64-bit user's system is padding? I'm afraid of the answer. The heroes of yesteryear could've coded miracles in the ignored spaces in our data.
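To make the cost concrete, here's a rough illustration using Python's `ctypes`, which follows the platform's C alignment rules: a one-byte flag next to an eight-byte integer typically buys you seven bytes of padding.

```python
import ctypes

# A 1-byte flag followed by an 8-byte integer: alignment rules
# insert padding before `value` so it lands on an 8-byte boundary.
class Padded(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_uint8),
                ("value", ctypes.c_uint64)]

# The same fields, packed with no padding at all.
class Packed(ctypes.Structure):
    _pack_ = 1
    _fields_ = [("flag", ctypes.c_uint8),
                ("value", ctypes.c_uint64)]

padded_size = ctypes.sizeof(Padded)   # 16 on typical 64-bit ABIs
packed_size = ctypes.sizeof(Packed)   # 9
```

On a typical 64-bit ABI that's 7 wasted bytes out of 16 for this one struct, which is exactly the "ignored space" the comment is talking about.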
This sounds like it could be really exciting. Is there anywhere I can find out more?
Specifically, I've been struggling to find an appropriate backend for Server-Sent Events, could this feature help with that?
Most (definitely not all) of the mis-alignment problems were in the network stack, and were centered around the fact that ethernet headers are 14 bytes, while nearly all other protocols had headers that were a multiple of at least 4 bytes.
I've said it before, and I'll say it again: If I had a time machine, I would not kill Hitler. I'd go back to the 70s and make the ethernet header be 16 bytes long, rather than 14.
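A quick sketch of the arithmetic with Python's `struct` module: the Ethernet II header works out to 14 bytes, so the IP header behind it starts 2 bytes off a 4-byte boundary, misaligning every 32-bit field in it.

```python
import struct

# Ethernet II header: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType.
ETH_FMT = "!6s6sH"
eth_len = struct.calcsize(ETH_FMT)    # 14 bytes

# The IP header that follows starts at offset 14 in the frame,
# which is not a multiple of 4 -- hence the misalignment pain.
misalignment = eth_len % 4            # 2
```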
> As an international website building platform, obtaining an ICP license for China is very important to our users. The actual process of obtaining an ICP license though is quite complex. With Alibaba Cloud's built-in and easy-to-follow ICP application process, it has helped with our user experience a lot.
Seems like its killer feature is making a China ICP license easy.
- "based on the instance rental fee"
- "Tell us what you think about this page and win $10 credit!"
- "Instance Fee, Storage fee and Public Traffic fee"
Example: your product displays news. Some of it might be considered unacceptable by the Chinese government and cause you to get shut down or blocked.
Tencent Cloud: https://www.qcloud.com/?lang=en
Baidu Cloud (Chinese only): https://cloud.baidu.com/
Netease/163 Cloud (Chinese only): https://www.163yun.com/
I use Tencent Cloud for a small China-oriented SaaS. The SDK APIs are kind of a mess/lacking, but the service is otherwise pretty reliable and easy to use.
Could anyone explain the sudden excitement about their service?
Also, back of the napkin math, but GCE is even cheaper.
Alibaba Cloud ($79.00/mo): 2 Core CPU, 8GB Memory, 80GB SSD
Google Compute Engine - Taiwan Region ($69.81/mo): n1-standard-2 (2 vCPUs / 7.5 GB Memory), 80 GB SSD disk
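Spelling out the napkin math, using the monthly list prices quoted above:

```python
alibaba = 79.00   # 2 vCPU / 8 GB RAM / 80 GB SSD, per month
gce     = 69.81   # n1-standard-2 (2 vCPU / 7.5 GB RAM) + 80 GB SSD, per month

savings = (alibaba - gce) / alibaba
print(f"GCE is ~{savings:.0%} cheaper")   # ~12% cheaper per month
```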
(Note: I work on Terraform)
So far, you can't really claim you've ever designed a global platform, because your stuff clearly doesn't work in mainland China. Think about it: 95% of all US services you can think of do _not_ work there (google.com, GCE, most of AWS, golang.org, docker, etc.). For $30/year you get a chance to battle the GFW and the ability to build something that truly works in all major markets.
I've been wondering for a while
* Does it mean that if I use boto3 (the Python library for AWS) with a different endpoint (which I know can be overridden, as we do this for our CI tests) and only do basic operations (put content/get content), I do not have to switch to another library?
* The comparison does not mention things like presigned URLs (for sharing private content for a limited amount of time); what is the situation for OSS?
* Do Aliyun engineers work on closing the gap?
Wonder what OLAP features it provides beyond managed, massively parallel SQL like BigQuery's.
Disclosure: I work for Alibaba Cloud. Drop me an email (in my profile) if you're interested in the opportunities. Yes, we have an office in Seattle (Bellevue).
But now, there is no need.
If you want to create new cloud I would rather shoot for cheaper egress as this may give you an edge in many data transfer intensive applications.
This doesn't look like a serious contender with AWS.
Sounds strange to me.
tl;dr it's pretty good, if you know AWS this'll be OK, their support is competent.
I like the subscription model, especially if it has a free tier, because passionate customers can support the developer team and subsidize less passionate users
The same principle is still used in consumer video encoding (and all but the highest-end professional video), where it's described as e.g. 4:2:2 or 4:2:0, the numbers describing how many pixels' worth of chroma (colour) data are provided for each block of luma pixels.
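As a rough sketch of the savings, here's the per-frame byte count at 8 bits per sample for the common schemes (planar YCbCr layout assumed):

```python
def yuv_size(width, height, scheme):
    """Bytes per frame at 8 bits/sample for common chroma subsampling schemes."""
    luma = width * height
    chroma_per_plane = {
        "4:4:4": luma,        # full-resolution chroma
        "4:2:2": luma // 2,   # chroma halved horizontally
        "4:2:0": luma // 4,   # chroma halved in both dimensions
    }[scheme]
    return luma + 2 * chroma_per_plane  # Y plane + Cb plane + Cr plane

full = yuv_size(1920, 1080, "4:4:4")   # 6,220,800 bytes
sub  = yuv_size(1920, 1080, "4:2:0")   # 3,110,400 bytes -- exactly half
```

So 4:2:0 carries a 1080p frame in half the raw bytes of full-resolution chroma, before the codec even starts compressing.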
They're more directly referring to the backmasked message on Pink Floyd's Empty Spaces, which goes: "Congratulations. You have just discovered the secret message. Please send your answer to Old Pink, care of the Funny Farm, Chalfont."
Then, after a few minutes:
"Hammurabi, I beg to report..."
I happen to have just opened a PR to support indexed PNGs in node-canvas (HTML Canvas impl for node) based roughly on the proposed pixelFormat API [1,2] for the same reason that the space savings can be immense.
It's also straightforward to change the palette of an indexed PNG after it's encoded because you don't need to re-compress the body. This lets you do cool tricks with recoloring images.
 https://github.com/Automattic/node-canvas/pull/935 https://github.com/WICG/canvas-color-space/blob/master/Canva...
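The palette trick can be sketched in a few lines. This toy model only shows the principle (indices in the pixel data, colors only in the palette), not actual PNG chunk handling:

```python
# Pixels store palette indices; recoloring rewrites only the tiny palette,
# never the (much larger, compressed) pixel data.
pixels = bytes([0, 1, 1, 0, 2, 2])                  # one byte per pixel
palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]   # PLTE-style RGB entries

def render(pixels, palette):
    return [palette[i] for i in pixels]

before = render(pixels, palette)
palette[0] = (0, 0, 0)       # recolor every index-0 pixel with one write
after = render(pixels, palette)
```

In a real PNG the same idea applies: you patch the PLTE chunk and leave the zlib-compressed IDAT data untouched.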
I'm curious if it would've been simpler to use it, albeit less efficient.
Pretty sure that only runs in Chrome anymore.
Invoking Betteridge's law of headlines, I'm guessing not?
Consider the Mongol empire. While the Mongol empire had some things in common with a modern state, squeamishness about killing was not one of them. There are cases where they decided a city needed to be destroyed and they took great pains to make sure that destruction was total. Every man, woman, and child they could find was put to the sword (or axe). Depopulating an entire city was a difficult task before carpet bombing became an option. Mongol soldiers were given quotas and expected to produce enough ears to show that they had met that quota. Punishments for not meeting that quota were harsh. Ears were put into sacks and carted off in wagons to be counted. Even dogs, cats and chickens were killed in some cases. There are recorded instances of Mongol armies leaving towns after doing this and then deliberately returning a week or two later, just to make sure they got anyone who had managed to hide in a basement or who was out of town when they were first there.
Consider the Romans. After the third Punic war Carthage's population was sold into slavery en masse and the city burned for 17 days. The earth was salted. There was no fourth Punic war because there were no Carthaginians left alive and free. Let's not even talk about the Assyrians!
Many states throughout history have committed genocide against enemies and many others have persecuted populations within their own borders. Concentration camps are only necessary now because most people won't stand for such atrocities, and states therefore feel compelled to carry them out in relative secrecy. Modern human society is gentler now than at any point in our past, although perhaps navel-gazing and self-accusatory articles like this are part of the reason why, no matter how uninformed they might be.
And for me at least, the main take-away from the image of the concentration camp is clear: elections have consequences, so take your vote seriously. Form a functional coalition with others to elect the least terrible person you can. If you don't, you may end up handing the machinery of the state - capable of harm on a massive scale - over to the unsavory or the insane. That's the lesson of the concentration camps and the lesson of recent history.
So I think the distinction between the nature of the state and the nature of its citizens is important for effective democratic participation. If you are so cynical that you think that 'concentration camps reveal the nature of the modern state,' why would you participate in such a thing? If we are collectively down on the very concept of the state, there's no way to run a good one.
"The scores are based on a combination of the popularity of the book and relevance to the topic. The best possible score is 100 and the worst is 0."
The website is a guided repository of Python books and currently lists the best 100 books. It classifies them into fine-grained categories and shows the best books in each. It has filters for Python version, free vs. non-free books, etc. For the beginner section, you can even filter books by the topics you want to learn.
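The quoted scoring rule could be sketched as a weighted blend of the two signals. This formula is purely hypothetical; the site doesn't publish its actual weighting:

```python
def score(popularity, relevance, weight=0.5):
    """Hypothetical 0-100 score blending popularity and topic relevance.

    Both inputs are assumed normalized to [0, 1]; `weight` is a made-up
    knob, not anything the site documents.
    """
    assert 0 <= popularity <= 1 and 0 <= relevance <= 1
    return round(100 * (weight * popularity + (1 - weight) * relevance))

s = score(0.9, 0.7)   # 80
```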
This is probably the best list of Python books I've seen. Great job!
Are for experienced programmers new to Python, which is a shame.
I am in the market for a cloud GPU offering, and I have to say the big cloud providers are very uncompetitive here, only offering these old, slow GPUs.
How is the quality compared to x264 with the default settings (preset medium, crf 23)?
It makes me think that we're training the wrong people in college by making CS a very difficult, math-heavy field, which often causes people with stronger human skills to drop out. Programming doesn't have to be any more math-heavy than building a house, yet we force undergrads to implement algorithms on paper. The amount of potential talent wasted by college is staggering.
In the consulting world, we call this job "enterprise architecture". It does, in fact, pay very well: it requires someone with both a sharp business mind and comprehensive technical skills, and those are very difficult to find in one person. I personally am more of a "jack of all trades" type; but you can be a successful architect by focusing on specific technologies as well.
I honestly find that it's easier to take someone who's a hacker type, and teach them the business. You look at the business itself as a large, complex system and model your application development around that. But you also have to be a good enough technologist yourself so that you can tell your dev team when their designs don't match up to the business problem (this is a common problem when requirements are not clearly communicated).
A good architect is the person who understands both the business context and the technology implementation. You don't have to be in-the-weeds building the product, but often you do have to build quick POCs to prove out an approach before handing off the designs to development - so being able to code is a necessity IMO.
We started with a simple problem that plagues HR departments in every conceivable industry with unions, finding substitute personnel and erroneously assumed that it was a simple fix. Over the past year and a half we have accumulated a great deal of knowledge after interacting with as many people as possible and have finally released a version that meets our original criteria (and much more). It was obviously not a simple fix.
If I have one thing to tell anyone who is looking for business ideas to try out their new programming skills on, I strongly suggest taking the time to learn as much as possible about the people to whom you want to provide a solution, then recruiting one of them to help you build it, lest you become another project that solves a non-issue beautifully.
- Here's a pool of knowledge about software development: hardware, operating systems, memory, disks, file formats, databases, networks, protocols, languages, debuggers, design patterns, security, accessibility, UI/UX, distributed systems, paradigms, typical algorithms & data structures, and CS problems
- There's a pool of knowledge about whatever industry you get into as a developer: user demands, existing workflows, existing infrastructure, previous decisions, legal regulation & compliance, physical laws, profitability, and practical limits.
Your software development skills should reach a point where you don't write "Bad Code" -- anything that's clearly wrong, like loading an entire database table into memory when you could read individual rows, storing passwords in cleartext, or not doing anything for accessibility (this is not about design patterns or space/tab debates). These mistakes have been made hundreds of times by new and 'experienced' people alike.
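The table-loading example can be made concrete; a minimal sqlite3 sketch of materializing everything versus streaming rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Bad: fetchall() materializes the whole table in memory at once.
rows = conn.execute("SELECT x FROM t").fetchall()

# Better: iterating the cursor streams rows; memory use stays constant
# no matter how large the table grows.
total = 0
for (x,) in conn.execute("SELECT x FROM t"):
    total += x
```

With a thousand integers it doesn't matter; with a hundred million rows, one of these approaches takes down your service.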
It takes time to get to this point. More time than anyone likes to admit because the pool of knowledge grows and shrinks daily, but has undoubtedly had a net expansion since computers were a thing.
It takes time to get deep knowledge about whatever industry you get into. This is different for every industry. There's a practical minimum that you need to work on solutions or do maintenance on software within this industry. This is to avoid "Bad Code" which will hurt you, other people, or your business.
You can gain industry knowledge by just being given problems and being shown. This is probably how most of us know our industries from the get-go. A minority of us came from those industries and transitioned to programming later, so we already had a base level of knowledge of our problems.
If I've got the definition of Deep Context right from this article, it means to get to that point, you have to spend a good amount of time within the industry. It's not something you can gain completely by reading out of a book.
If you're to gain deep context within an industry, you have to devote some time away from software. You can't do both at the same instant (but certainly within a day). When you study an industry, there's an opportunity cost to not learning something new about software and vice versa.
When you add more requirements to a single job, it increases the time we have to spend before we're employable. Not every industry changes as fast as software does, but some certainly do, possibly catalyzed by software.
If you increase the time requirements, it's going to reduce the available pool of engineers as long as all of the engineers are honest and don't apply for jobs or remote contracts until they're ready.
If you don't want the time requirements to increase, you have to pay the opportunity cost from one of your pools of knowledge.
So really, we need a much better "good enough" for employing developers and career development, including teaching software and industry knowledge. Because eventually the time requirements are going to become steeper and steeper. It can't go up forever.
But that doesn't mean he's correct that the core issue is the restrictions put on banks five years ago. That is so self-serving. Remember, the banks got bailed out after the recession, but the people lost their homes. More money has flowed to the top during this period, making life harder for average Americans.
It's hard to be sympathetic to Mr. Dimon.
Its nest had fallen from a tree in a storm - we found it a day or so later in a not great state. But within a day of feeding it was hand tame and happily came to live in the house. By the end of the summer though it learnt the skills it needed and flew off.
While it lived with us it quickly learnt the places we were happy for it to be in the house, where its 'bed' was and when we would feed it. It continually picked up small toys to play with and would delight in dropping things off the table for us to fetch and return for it.
In some ways I was so sad when it left, and as a child regretted not clipping its wings - but I'm pleased we didn't and was hopeful it went off to lead a normal magpie life ;)
Researchers trained ravens to exchange tokens for food. Presented with a choice of 15 different objects, the majority of the birds picked the food tokens and stashed them for up to 17 hours until an opportunity for exchange came again.
The author believes this passes as the pinnacle of a 4 year old's planning abilities.
What if ravens just act according to environmental cues instead?
It's not going to go well under our new Raven Overlords.
My only hope is to taste like chicken fingers and therefore die quickly.
I understand that this is what they want. They want to drive executives' interest in the product, but I believe they do so at the expense of their goodwill with the tech community.
Am I the only one who cringes when these ads air?
Edit: "magic beans" is harsh and it isn't that I don't think their tools are good. My point is that they put you in a position where it seems very unlikely to meet expectations.
Well, can't say I'm surprised. I used to work on that project a few years ago, basically the idea was that Watson would look at a patient's medical record, figure out what medications they're on, what symptoms they had, etc. and cross-reference all that with the medical knowledge it had ingested from vast amounts of medical literature. In theory, Watson could figure out what medications the patient should or should not be using, a proper course of treatment, etc.
There were two major problems:
First, it turns out your medical record is mostly written in narrative form, i.e., "John Smith is a 45 year old male...", "Patient is taking X mg of Y twice daily", "Patient was administered X ml of Y on 3/1/2016", etc. In other words, there's basically no structured data, so just figuring out the patient's stats, vitals, medications, and treatment dosages was an adventure in NLP. All that stuff was written in sentence form, and of course how things were written depended on who wrote it in the first place. It was really, really hard to make sure Watson actually had correct information about the patient in the first place.
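A toy sketch of why this extraction is so fragile: a single regex handles exactly the phrasings it was written for and silently misses everything else. (The pattern and notes here are illustrative, not Watson's actual rules.)

```python
import re

notes = [
    "Patient is taking 20 mg of lisinopril twice daily",
    "Patient was administered 5 ml of morphine on 3/1/2016",
]

# Naive dosage extractor: amount, unit, drug name.
DOSE = re.compile(r"(\d+)\s*(mg|ml)\s+of\s+(\w+)", re.IGNORECASE)

extracted = [DOSE.search(n).groups() for n in notes]
# [('20', 'mg', 'lisinopril'), ('5', 'ml', 'morphine')]

# A clinician who writes "lisinopril 20mg BID" defeats the pattern entirely.
missed = DOSE.search("lisinopril 20mg BID")   # -> None
```

Multiply that by every author's writing style across thousands of records and you see why just getting correct patient facts was an adventure.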
Second, all that medical literature that was being ingested? Regular old, don't-know-anything-about-medicine programmers were the ones writing the rules for manipulating the data extracted via NLP. Well, guess what: if you're not a domain expert, you're bound to get things wrong.
Put those two things together and we would frequently get recommendations that were wildly incorrect, but that's to be expected when you get garbage input being fed into algorithms written by people who aren't domain experts.
It was a classroom nightmare. WIFI not working, Bluemix required for all workshops not working at that time, teachers very new on the topic themselves (one confessed he only knew Watson for a couple of weeks before the training), no announcement, no nice moment to socialize or build up a community, no coupon given to try on our own after, ...
And... the algorithms didn't work at all. The sentiment analysis classified the sentence "I wasn't happy at all by the service" as really positive, because 'happy' and 'all' were present in the sentence.
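That failure mode is easy to reproduce with a naive bag-of-words scorer (illustrative word lists, not IBM's actual model):

```python
POSITIVE = {"happy", "great", "good", "all"}
NEGATIVE = {"bad", "terrible", "awful"}

def naive_sentiment(text):
    # Bag-of-words scoring with no negation handling whatsoever.
    words = text.lower().replace("'", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

s = naive_sentiment("I wasn't happy at all by the service")
# Scores +2 ("happy" and "all" both hit the positive list),
# even though the sentence is clearly negative.
```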
Disclaimer: SWE at Watson Health
I made several rounds around all of the stalls, and sat at the bar for a couple of hours with friends, and the whole time I could see the IBM stall, with 4-5 people there, WATSON plastered everywhere and nobody talking to them.
So I went over and got talking to one of their technical people there.
I am highly experienced in Deep Learning so I started talking about Neural Nets, and he went blank, and admitted he didn't know much about that. I inquired about WATSON's technology and he couldn't answer telling me he didn't know.
I asked about the main use cases, and what makes WATSON's offering better than Deep Learning; he couldn't answer, or even compare on basic levels.
I asked him "What are the coolest uses of WATSON you've seen?" and he immediately went into a canned response about WATSON diagnosing cancer (a project I had seen and was familiar with). We spoke a few minutes on that, and I asked what other cool projects WATSON had been used on... he had nothing, and I mean literally nothing.
It's almost like 2016-17 were gold-mine years for marketing buzzwords and some companies are closing deals with no real execution plan for what they're selling.
As a non-expert, it seems like the top-end researchers are working for Google (Hinton, Bengio, etc.), Facebook (LeCun), Baidu, and Uber (ex-CMU faculty). I don't really see a lot of machine learning research coming out of IBM comparable to the others.
IBM seems to be running on the fumes of its previous greatness while burning the ship to generate stock market returns.
PM me for more ;]
I am genuinely curious...
I have a comp sci degree and worked in different industries relating to software and have never even seen or touched any IBM tech except for those old cash registers.
Umm, so add them? And Nvidia, Intel, Baidu, Uber, Tesla? Anybody else? That single chart would actually be more interesting than the entirety of this article.
They brought in a sysadmin after they got up to 800 OS instances. Before that, it was just 3 part-time researchers handling the system administration duties.
I have seen a lot of negative press on Watson, but really, it can be evaluated like any other API to see if it meets your needs.
+1 "dog shit wrapped in cat shit" .. that is awesome.
Circling the drain.
AI in a nutshell.
Photography, on the other hand, is a very common hobby in the tech community. And the comments here seem to reflect that this effort strikes a little close to home: Those pictures are lousy; if you find them appealing you have no taste! Just because they're 'professional' doesn't mean they're good! Machines can't replace human judgment; they have no soul! I bet that machine had a lot of human help!
Tech people may tell you great stories about meritocracy and reason, but in the end we are just emotional monkeys. Like the rest of humanity.
Those of us who can accept this may at least aspire to be wise monkeys.
The method proposed in the paper (https://arxiv.org/abs/1707.03491) mimics a photographer's workflow: from taking the picture (image composition) to post-processing (traditional filters like HDR and saturation, but also GAN-powered local brightness editing). In the end it also picks the best photos (aesthetic ranking).
The selected comments from professional photographers at the end of the paper are very informative. There's also a showcase of model-created photos at http://google.github.io/creatism
[Disclaimer: I'm the second author of the paper]
1. The diagonal lines in the clouds and the bright tree trunk at the extreme right of the first image are distractions that don't support the general aesthetic.
2. The bright linear object impinging on the right edge of the cow image and the bright patch of the partial face of the mountain on the extreme left. Probably the gravel at the left too since it does not really support the central theme.
3. The big black lump that obscures the 'corner' where the midground mountain meets the ground plane in the house image.
4. The minimal snow on the peaks in the snow capped mountain image is more documenting a crime scene than creating interest. I mean technically, yes there is snow and the claim that there was snow would probably stand up in a court of law, but it's not very interesting snow.
For me, it's the attention to detail that separates better than average snapshots from professional art. Or to put it another way, these are not the grade of images that a professional photographer would put in their portfolio. Even if they would get lots of likes on Facebook.
Again, it's an interesting project and a significant accomplishment. I just don't think the criteria by which images are being judged professional are adequate.
It's interesting to see algorithms catching up to being able to replicate this. However, when you mention these kinds of abilities to photographers, they get defensive, almost as if you're threatening their identity by saying a computer can do it.
I hope that one day our driverless cars will alert us when there is a pretty view (or a rainbow) so we take a moment to look up from our phones. Every route can be a scenic route if you have an artistic eye.
The model has the reverse situation, of course: it cannot perfectly guess the emotional response for any one person, but it has access to a larger assortment of data.
In addition, in different contexts it may be easier/cheaper to place a machine vs. a human in a certain locale to get a picture.
If my theorizing makes any sense, it suggests that this technology would be useful in contexts where: the locale is hard to reach and the topic is likely to evoke a wide variety of emotional responses.
So what? Maybe I missed it, but what are some potentially meaningful applications of this technology? What motivated this to begin with? Or are these questions that we even bother asking anymore?
I remember the first time someone showed me the Snapchat app -- it would make them look like a cartoon dog, or all these other real-time overlays. I thought, 'jesus, so glad we're all getting advanced computer science degrees so we can work on utterly useless shit like this...'
Saw a few people talking about retouching and studio work - I do a lot of studio shoots and retouching on my own, and would be happy to help or participate in projects. Feel free to reach out.