Neural networks and deep learning are truly awesome technologies.
If you're also interested in real-time OCR like this, I did a write-up of the approach that worked well for my project. It only needed to recognize Scrabble fonts, but it could be extended to more fonts with more training examples.
I took a few screenshots. Aligning the phone, focus, light, and shadows on the small menu font was difficult; you have to keep steady. Sadly, I ended up hitting the volume control on this best example. Tasty cockroaches! Ha! http://imgur.com/j9iRaY0
The new fad for the 'deep' learning buzzword annoys me, though. It seems so meaningless. What makes one kind of neural net 'deep', and are all the other ones suddenly 'shallow'?
How come? If the valuation for a Team is only 25% more, then a Solo founder is clearly better off than a Team.
A team is 2+ founders, which means your shares are divided by 2+. Solo is clearly winning here even if the total valuation is less.
e.g. on comparing companies that do better. You could have a data set of 150 companies whose exit performance (or current value) looks something like this (numbers in millions):
5000,1000,500,400,200,200,100,50,50,30,30,20,20,20,15,15,15,15,0,0,0,0,0,0,0 (repeated 6x to get 150 data points)
Now compare that against a data set that is those exact numbers divided in half:
2500,500,250,200,100,100,50,25,25,15,15,10,10,10,7.5,7.5,7.5,7.5,0,0,0,0,0,0,0 (repeated 6x to get 150 data points)
If you compare these data sets with a Two Sample T-Test you have to go down to 91% confidence to get a significant result (http://www.evanmiller.org/ab-testing/t-test.html#!307.2/986....)
That may not sound that bad, but now add a super-unicorn to each one of those data sets, a $20B exit. Now the differences aren't even significant at 80% confidence.
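For anyone who wants to check this without the linked calculator, here's a sketch of that comparison using Welch's two-sample t-test with a normal approximation for the p-value (reasonable at n = 150); the exact figures from the calculator above may differ slightly:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and a normal-approximation p-value."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    t = (mean(a) - mean(b)) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided, large-n approximation
    return t, p

base = [5000, 1000, 500, 400, 200, 200, 100, 50, 50, 30, 30, 20, 20,
        20, 15, 15, 15, 15, 0, 0, 0, 0, 0, 0, 0] * 6   # 150 data points
halved = [x / 2 for x in base]

t, p = welch_t(base, halved)  # t ~ 1.7, p ~ 0.09: only ~91% confidence
# Add the $20B super-unicorn to each set and the difference vanishes:
t2, p2 = welch_t(base + [20000], halved + [20000])  # p balloons past 0.4
```

The single outlier dominates the variance, which is exactly why power-law exit data is so hard to compare with t-tests.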
e.g. in Item 7 about technical co-founders: "consumer companies with at least one technical co-founder underperform completely non-technical teams by 31%"
Let's say that First Round has 150 consumer businesses and we're just going to look at a binary outcome of something like "valued over $50m". Now let's say that 100 of these consumer companies have technical co-founders and 50 are completely non-technical. Say 40% of the non-technical teams are "successful" by the $50m metric. That means that 30.5% of the technical teams are successful (if they are doing 31% worse by the numbers in the article, since 40%/1.31 = 30.5%). That's not a significant result at 80% confidence (http://www.evanmiller.org/ab-testing/chi-squared.html#!31/10...)
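To check that claim without the calculator, here's a plain Pearson chi-squared sketch; the 30/100 vs 20/50 success counts are my assumed rounding of the percentages above:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared for the 2x2 table [[a, b], [c, d]]; df = 1."""
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    # With df = 1, chi2 = Z^2, so the p-value follows from the normal CDF.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# 30 of 100 technical teams "successful" vs 20 of 50 non-technical teams:
chi2, p = chi2_2x2(30, 70, 20, 30)  # chi2 = 1.5, p ~ 0.22
```

With p around 0.22, the result indeed fails even an 80%-confidence bar.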
I understand why they published the piece and think it will get a lot of reads, but I really wish I could read a version with statistically relevant insights instead.
Ivy League School and working at a prestigious company? You don't get either of those by being a slacker.
Younger team, woman co-founder and more than one founder? You better believe there is going to be more pressure to prove yourself and not sell early or give up (vs. being a single founder or an older proven founder).
Standing out from the crowd at demo day or getting noticed out of all the noise of social media? That takes some dedication. I guarantee that the people who did get noticed that way didn't just send one email or one tweet. They were hustling their idea hard.
Great read though. I loved the point that startups don't have to come from SF or NYC to be successful!
Female founders outperforming male teams: My hunch would be that the bar for women to get funded (at least historically) has been higher than for men, so the female-led start-ups would be a better calibre of company. Relatedly, since this is based on investment performance, could it be that the female founders received smaller initial investments, so performing on par with male teams would make the ROI look better?
Halo effect: This to me would indicate that we shouldn't be encouraging fresh college graduates to work at start-ups and instead get experience at a more mature company. I wonder how much tenure they had at their halo company prior to founding the start-up and how it ties with the average age of founding.
Solo founders perform worse: I wonder what happens if you frame this from the point of view of the founder. If the solo founder had a $100 return and the team had a $260 (160% better) return; assuming equal dilution and equal division between founders, the solo founder gets $100, a two-founder team gets $130 each (30% better), and a three-founder team gets about $87 each (roughly 13% worse).
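A quick sketch of that per-founder arithmetic, under the same assumptions (equal dilution, even split among co-founders):

```python
solo_return = 100
team_return = 260  # the "160% better" team outcome

# Per-founder take for 2- and 3-founder teams, split evenly:
per_founder = {n: team_return / n for n in (2, 3)}
# Relative to the solo founder's $100 (positive = better off):
vs_solo = {n: v / solo_return - 1 for n, v in per_founder.items()}
# Two founders: $130 each (+30%); three founders: ~$87 each (~-13%).
```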
Next big thing from anywhere: Also interesting, I'd like to see how this varies by referral source. Do companies referred by other investors perform better than non-investor referrals (or can other investors pick companies better than social connections).
Technical co-founder, enterprise product: +230
Elite school: +220
ex AMZN, AAPL, FB, GOOG, MSFT, TWTR employee: +160
Female founder: +128
Discovered investment via non-traditional VC channel: +58
Technical co-founder, consumer product: +31
Team average age under 25: +30
Solo founder: -163
I'd love to see an inverted analysis of this effect, i.e. which schools were the best indicators of success. Pre-deciding to look at their definition of "top schools" probably only shows part of the picture.
I think that probably explains the "no tech cofounders do better" bias in Consumer; the bar is probably higher there.
I'm not convinced about the age conclusion: depending on which statistics you focus on, you either conclude that 25 is best or 32 is best:
> Founding teams with an average age under 25 (when we invested) perform nearly 30% above average [...] for our top 10 investments the average age was 31.9
"The results were stark: Teams with more than one founder outperformed solo founders by a whopping 163% and solo founders' seed valuations were 25% less than teams with more than one founder."
How many of the 300 investments in their portfolio were solo founders? 10?
Solo founders are rare, and it's often harder to raise money as a solo founder. That means fewer companies have solo founders to begin with.
Source: I am a solo founder
Everything else is secondary to finding the home runs. Even if you multiply all those advantages together you get:
1.63 * 1.3 * 2.2 * 1.6 * 1.5 * 1.63 * 2.3 * 1.58 = 66
So if you manage to somehow get every single one of those attributes at max in a company you'll get roughly 66x the valuation or performance or whatever versus a company/team with none of them.
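Multiplying the listed effects out (each percentage converted to a 1.x multiplier) does land at roughly 66x:

```python
import math

# The eight multipliers from the list above, as 1 + effect:
factors = [1.63, 1.3, 2.2, 1.6, 1.5, 1.63, 2.3, 1.58]
combined = math.prod(factors)  # ~66.3x if every factor stacked multiplicatively
```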
Of course if you go for all those things you'll probably only get one deal per year.
This isn't terribly meaningful.
User Content. We collect your personal information contained in any User Content you create, share, store, or submit to or through the Service, which may include photo or other image files, written comments, information, data, text, scripts, graphics, code, and other interactive features generated, provided, or otherwise made accessible to you and other Users by Blockspring via the Service in accordance with your personal settings.
Case in point: one of the example scripts in the blog post (https://open.blockspring.com/pkpp1233/get-amazon-new-price-b... ) requires you to input an Amazon Product Access Key and an Amazon Product Secret Key as parameters.
I'd want to know who else in the company is using this data. Who has used it in the past? Have they done work that is similar to, or even a duplication of, the work I'm doing?
These information management issues are currently hidden, but result in lost productivity. Just this past month my friend at Google found out another person had already done his analysis and he could learn from the previous work. Just knowing someone had previously used the same dataset could have saved him 7 weeks of work.
In fact, how about implementing a Logo turtle graphics system :)?
How do you scroll to zoom? Down arrow and Page Up/Down do nothing, and there's no scroll bar.
It's kind of interesting that moving while turning at a constant rate in the hyperbolic plane makes you gradually "drift". Is that actually true, or is it an artifact of the software?
The "perk" of contributing would be that you would get access to all of the expert-witness prepared statements and legal work, so if a patent troll comes after you next, you would have a lot of your defense work already done for you. Plus, once the patent troll loses a case, especially on appeal, that decision can be used as precedent.
If anyone wants to google them and can't find anything (like me), that's because the name is Soverain, not Sovereign.
EFF and NYT ran full reports on him previously.
To achieve a decisive victory in these cases, Newegg typically has to take the defense of its case through a full trial and possibly an appeal.
People often fail to appreciate just how risky a trial can be. We stand on the sidelines and laugh at how absurd this or that flaky patent appears. And yet - and yet - the law itself went through a phase in which such patents were almost routinely granted.

Standards may have tightened over time but, still, a patent claim in a hotly litigated case will not survive to trial unless it has been able to withstand a host of pretrial challenges by which a defendant has already asked a court to rule that the patent, as a matter of law, should not stand. It is only when a court tosses the patent claim in the pretrial phases that a defendant avoids the risk of a potentially absurdly high verdict after trial. If the claim survives such challenges, then the defendant has no choice but to settle, or to play it out through trial while incurring the risk of having a large verdict entered against it.

This is the point at which most defendants - even large, deep-pocket defendants who can otherwise afford to pay the costs of defense - will fold. Newegg, on the other hand, has made the tough decisions, incurred the major risks, and largely managed to defeat such patents on the merits.
In doing so, it incurs the very large costs of defense typical in such cases. And it has the guts to take the potential liability risks of going through full trials to take the cases to verdict.
Large, institutional defendants have occasionally (though rarely) adopted such policies in the past. For example, over decades, GM adopted a policy of never settling injury claims if its own experts had determined that the GM autos were not at fault. In doing this, it would often incur defense costs that far exceeded the value of the claim being defended. But it did so to send a firm message to the plaintiff's bar that prosecuted such claims - that is, "if you want to sue GM, your case had better have merit - you will get no nuisance settlement from us."
Newegg effectively is delivering the same message but with an important twist. If GM successfully defended a particular injury claim, that ended the case for that claimant but had no preclusive effect on other, similar claims. If Newegg successfully defends and defeats a patent claim by having the patent declared invalid, the law of what the lawyers call "res judicata" (meaning, "a matter adjudged") kicks in and kills that patent off forever.
So, not only does Newegg take out the garbage, it makes sure it won't accumulate ever again.
This is a true public service for which we all must tip our hats.
Companies that are still using those should be sued for not securing their consumers' information properly. Failing that, this would be an even better way of achieving the same thing.
- Why is this happening in the first place?
- Who is this entity that grants a loose patent?
- Why isn't this entity being interrogated?
I suspect this is actually referring to #1 and #2 in the next list, not the list in which this item appears. It is confusing.
I don't want to call you out but in https://c9.io/blog/content/images/2015/07/recursion.png you're really only running C9 in C9 - the rest have the same URL.
I've never tried the nested stuff, or would even think of it, but it sounds pretty cool. Hope to see more improvements and features in the future.
We need more of this to counter the MBA/Business closed allocation model.
Is this a "(our lawyers made us put this in)" sentence?
It's not like there is a .ilgl file type, and with one-time downloads DMCA takedowns are unlikely.
I still use it today for sending files here and there. :)
https://github.com/alfg/dropdot - Source with demo.
Q: "Why should I trust you?"
A: "Because you should! We're good people! Honest!"
I'd love to trust a service like this, but there's no credible effort to actually establish that trust.
> several graves and tombstones, as well as mortuary tablets, were discovered in the old foundations. In the chancel, lying with its head to the north, was an iron tablet, probably formerly a cenotaph, once embossed with inlaid brasses, now missing.
'Studies and scans showed that the box was made of non-English silver, and originated in continental Europe many decades before it reached Jamestown.
Horn said he believed it was a sacred, public reliquary, as opposed to a private item, because it contained so many pieces of bone.'
'There are no plans to open it.'
The article links to https://badssl.com/, which shows a list of links to good and bad configurations. Calomel gives more details about what is right and wrong, and sometimes surprises with its rating.
The Qualys SSLlabs scan does not accept an IP address. I'm often in the situation where the cert is installed and ready, but the name is not yet pointing to the new IP address. The above URL can verify that you haven't left out the intermediate cert.
It also gives a summary grade. Very few sites are 10/10 (I only remember GitHub having that grade).
`openssl x509 -in $FILE -text | less`
The tool performs a similar function to sslscan, THCSSLCheck and sslyze, but differs by crafting part of the SSL handshake instead of using an SSL library to establish a full connection. [...] Libraries either become outdated and therefore incapable of testing for new protocols such as TLSv1.2 or exotic cipher suites; or they are updated and lose support for older protocols namely SSLv2.
Support for SSL testing over SMTP (STARTTLS), RDP and FTP (AUTH SSL)
Source for self-hosting:
(edit: inb4 kneejerk about sourceforge)
Secondly, over the course of my 16 years of software development, I've found that when I get treated as less than human (e.g. the boss barking orders, denied vacation, etc.), most of the time these issues are never directly confronted. The unresolved friction creates mental roadblocks in my head; all of a sudden I'm less productive and creative, lethargic, and generally less excited about my work.
When a boss cracks the whip for employees to "work harder, faster!", it results in these subtle, mental withdrawals that are hard to pin down but are definitely costing the company money. Internal resentment results in a subconscious "fuck you" that most bosses may not even realize is occurring.
This example of TCO adds to that.
This is a great reason to run your own Tor relay, even if it's just a private bridge for you and your friends. You can even use a pluggable transport of your choosing -- I picked obfs4 to make otherwise identifiable Tor connections look like random noise to my service provider.
This could've been the title of many, many articles about Tor, and you'll see plenty more. It has seen so many attacks in the past, owing to the difficulty of its goal, that it should only be used as one step in a series of anonymity-enhancing methods.
That said, the fact that this research can identify what hidden service a user accesses skirts that line. On the other hand, that doesn't sound too different from a typical timing attack (something that Tor doesn't try to prevent).
Edit: glancing through the paper, it may also require monitoring the webpage, in which case it's less severe. I'll wait until there's a good writeup by the Tor Project itself.
From the paper
> Indeed, we show that we can correctly determine which of the 50 monitored pages the client is visiting with 88% true positive rate and false positive rate as low as 2.9%
Would love your feedback and of course I'm happy to answer any questions!
I feel that if a subset of this processor had been all that was introduced, it could have been successful.
The majority of the penalty for making a function call being negated? Wonderful! Heck, it sounds like (although I skimmed the latter half of the article and didn't fully grok the first half on my single read-through) the stack doesn't even need to be touched for some chains of function calls.
But there is a lot of work for the compiler here, wow. Knowing the maximum number of registers that is needed for any function call made within a function? Ouch.
Support for multiple return values is cool though. That'd be incredibly nice.
And again, rotating the registers to avoid hitting the stack, incredibly powerful.
Having that many globals accessible, also really powerful. All of a sudden the penalty for accessing your "God" object just went down by a fair bit.
Part 1: https://news.ycombinator.com/item?id=9955652
Part 2: https://news.ycombinator.com/item?id=9961506
The flip side of this is that in the StrengthsFinder personality assessment, it lists "Responsibility" as my top strength. I'm a man of my word who has never reneged on a promise or failed to pay back a debt, even if I have had to struggle or sacrifice to do so, and consequently I have near perfect credit and am financially relatively well off compared to the average situation for someone in my age bracket in the US.
While it's useful and good to seek to classify things into quantifiable buckets of data, it's also important not to lose sight of the fact that people are not easily quantifiable, and that any attempt to segment people into classifications will inevitably treat someone unfairly or misclassify them because they somehow differ from the typical set.
This is such bullshit. I had a prepaid phone for a few months because my previous one was broken and the new iPhone was going to be out in a few months. So I moved to Verizon prepaid: $40/mo, 2GB data, no taxes/fees/nickels/dimes. It worked great. Best part: I paid VZ $150 for a used iPhone 4s and traded it in for $200 credit toward a new iPhone. When the new iPhone came out, I switched to it and a postpaid plan. And I have stellar credit.
The point is: just giving up a prepaid phone by itself means nothing. GIGO.
Looking at other factors + feelings = trouble
These folks have already been rejected from traditional financing. There is a fine line between those who barely didn't qualify (but should) and really didn't qualify (and shouldn't). Where do you draw that line?
As far as character goes, there's an old legend where J.P. Morgan was asked by Congress on what basis he lends out money. His reply was the person's character, not their ability to repay. His reasoning in the story was that people of poor character, even having the money, would make up any excuse to avoid paying, whereas people of good character would do everything they could to get the money and pay up. Realistically, ability to pay is a huge consideration, but the story's lesson about character was wise. Interesting seeing it in action and automated to a degree.
Technology is really going to advance once we have anything that comes close to human level on NER and relation extraction. Kind of like self driving cars, the basic ideas have been around for decades, but performance in realistic adverse conditions remains awful.
The main idea is to think of stores as reducers (redux = reducers + flux). He also gave a very good talk on it at react-europe: https://www.youtube.com/watch?v=xsSnOQynTHs
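The reducer idea is language-agnostic: a store becomes a pure function (state, action) -> new state. A minimal sketch in Python with made-up action names (Redux itself is JavaScript, of course):

```python
# A reducer never mutates its input; it returns a fresh state value.
def todos_reducer(state, action):
    if action["type"] == "ADD_TODO":
        return state + [action["text"]]
    if action["type"] == "REMOVE_TODO":
        return [t for t in state if t != action["text"]]
    return state  # unknown actions leave state unchanged

# Dispatching is just folding actions over the reducer:
state = []
state = todos_reducer(state, {"type": "ADD_TODO", "text": "write docs"})
state = todos_reducer(state, {"type": "ADD_TODO", "text": "ship"})
state = todos_reducer(state, {"type": "REMOVE_TODO", "text": "ship"})
# state == ["write docs"]
```

Because the function is pure, replaying the same action log always reconstructs the same state, which is what makes the time-travel debugging in the talk possible.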
In my experience, the toughest thing to grasp about Flux was how to handle async server actions. It's something that a lot of tutorials (including, unfortunately, this graphic) hand-wave, but it's one of the first things you need to nail down if you want to do anything exciting in an SPA.
The todos usually make it seem like you have to have this flow of information:
ActionCreator -> ApiUtil -> ActionCreator -> Store
...but if you use the same action creator, you end up with a circular dependency. So you actually have to create a separate file of ServerActionCreators, that are only called with ApiUtils:
ViewActionCreator -> ApiUtil -> ServerActionCreator -> Store
This seems like a lot of boilerplate. At my job, we've simplified this a lot by using Reflux, which has async actions that run one Store callback when a call gets initiated, and another related one when it gets completed. But it's not ideal.
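The two-phase pattern described above (one store update when the call starts, another when it completes or fails) can be sketched roughly like this; the action names and dispatch shape are hypothetical, not Reflux's actual API:

```python
# One async action, three possible store notifications.
def fetch_todos(dispatch, api_call):
    dispatch({"type": "FETCH_TODOS_STARTED"})   # store can show a spinner
    try:
        data = api_call()
        dispatch({"type": "FETCH_TODOS_COMPLETED", "payload": data})
    except Exception as err:
        dispatch({"type": "FETCH_TODOS_FAILED", "error": str(err)})

# Capture dispatched actions in a list to see the flow:
log = []
fetch_todos(log.append, lambda: ["a", "b"])
# log == [{"type": "FETCH_TODOS_STARTED"},
#         {"type": "FETCH_TODOS_COMPLETED", "payload": ["a", "b"]}]
```

The point is that the "server action creator" is just the continuation of the original action, so there's no circular dependency between the view-facing creator and the API layer.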
Personally, I'd rather see a bigger app than a TodoMVC implementation with a "correct" example of async server actions.
If anyone knows about a flux implementation that solves this - I'd love to hear about it!
Specifically, I'm not sure what problem the Dispatcher is actually solving. It seems to just add boilerplate and indirection for little benefit.
Dan Roam's Back of a Napkin is a good guide to thinking and communicating in this visual way. You can get quite a long way with the freebies on his site, but it's worth shelling out for the whole book.
When I studied the diagram, I found myself running through an imaginary scenario in my head. The Overview text makes it hard to do this. If you don't create a diagram and want to explain a process clearly, don't explain the components -- follow an example action through the process, and your readers will grasp it much better.
One thing missing from many flux tutorials and sample applications that I would love to see included is error handling. How do you manage errors in your flux applications? Do you keep them in the store? If using something like react-router, how do you ensure you flush errors from the store as your routes change and they are no longer applicable to the data in view?
One of the big things I think a lot of React/Flux tutorials miss out on are the "Smart and Dumb" components. This was the missing "view controller" that I am used to with MVC and the flowchart illustrates it nicely.
That first tweet succinctly explains flux in less than 140 characters.
At the end of the day, pretty much every react "best practice" I've seen converges on approaches that raynos/mercury offer out-of-the-box.
React's decision to allow local state inside components was an "original sin", and every single engineer who decides to author a library to make React/Flux simpler is really just engineering around the poor decision of allowing local state.
Half of those should/might/could update if they felt like it. React class props are unnecessary complexity that had to be bolted on to overcome issues with local state.
Here's a better approach:
(1) components that are pure functions that take in state that they should render. state hasn't changed? don't call the function
(2) components that have state that needs to be tracked can export a function that produces an instance of the state that component consumes
(3) compose the state of the UI, by calling the state instantiating function of all the components in your UI. Nest as appropriate.
(4) Make sure all these state instances are bijective lenses that keep one source of truth for a certain state value. Everywhere that state is needed or could be modified receives that "cursor" via dependency injection. The simplest example demonstrating a state cursor is the raynos/observ library.
The composed state object is the waist of your app hourglass, just like IP is the waist of the Internet. All I/O modifies state and that state propagates from there. Anything rendered to the screen is the O in I/O. Any events coming from your mouse or other peripheral is the I in I/O. Any syncing via XHR or websockets can be the I and the O in I/O. All I/O flows to the state.
Anything that needs to react to changes in that state have two options: subscribe to state change events (push) or read the current state as necessary (pull).
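A tiny sketch of that push/pull state-cursor idea, in the spirit of raynos/observ though not its actual API:

```python
# One source of truth per state value; consumers get the cursor injected.
class Cursor:
    def __init__(self, value):
        self._value = value
        self._subs = []

    def get(self):              # pull: read the current state on demand
        return self._value

    def set(self, value):       # every write funnels through one place...
        self._value = value
        for fn in self._subs:   # ...and pushes change events to subscribers
            fn(value)

    def subscribe(self, fn):
        self._subs.append(fn)

seen = []
count = Cursor(0)
count.subscribe(seen.append)    # push consumer (e.g. a render function)
count.set(1)
count.set(2)
# seen == [1, 2]; count.get() == 2
```

Composing many of these into one nested state object gives you the "waist of the hourglass": all I/O writes through cursors, and everything downstream either subscribes or reads.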
This really isn't all that hard and React/Flux has overcomplicated things immensely and given it a flashy name. The myriad libraries that purport to make it easier and simpler are just overcompensating for something that fundamentally needs to be re-engineered, but won't because it breaks backwards compatibility and requires people to move state logic (read: business logic) out of the component classes (that probably shouldn't have been there in the first place).
Yet another case of late stage private capital.
Seriously though, Twilio is a great product; cheers to them!
I'm probably buying 3 houses in 3 different states as I write this.
I realize that perfect security is fantasy, but the practices of many of these organizations don't pass the laugh test. We'd be vastly better off if they would hire a security professional and listen to her.
I worry that the clamor about personal information exposure is going to be used to motivate restrictive and ultimately ineffectual government action and, perhaps, kill the goose that has been laying the golden eggs.