When doing anything even slightly intensive, my 15" MBPR is the same; I've had it hover at 100-105C for periods. Why not scale down the CPU at crazy temperatures? Intel CPUs have a cutout, but the entire machine needs a more gradual solution. Running at 90C+ is not viable long term.
Now that it's over 2 years old, I might just buy an entry-level MBP rather than maxing out as I usually do. They seem fast enough now, and what's the point if I can't even use the full power?
My main complaint about Swift is the lack of available documentation. In C languages, you have static header files which you can read to learn about APIs & types / classes / whatever. API documentation in Swift is generated on the fly when you search for a specific term that happens to be a function / type / protocol / whatever. But how do I find out what functions / methods are applicable to, e.g., a String? There is no way to just `grep` existing headers for 'String' or some such trick, to find everything related to that type. This may seem minor to some, but for me it is a major stumbling block that makes Swift effectively a black box.
Objective-C isn't going anywhere, just like C++ hasn't gone anywhere, nor has C left the building either.
There are 124 public frameworks in the 10.10 SDK, and 357 private ones. While a good chunk of these are written in Objective-C, a good deal of them are written in C, or in Objective-C with the guts in C++. AVFoundation, for example, is mostly C++ in the "backend".
The amount of effort it would take to move those to Swift would be fairly substantial, and with no real gain. This doesn't mean that new frameworks won't be written in Swift while remaining interoperable with Objective-C; that will probably happen, though I doubt it'll happen in my development lifetime. So we are looking at a gradual replacement of frameworks, or superseding them with newer ones (QuickTime -> AVFoundation -> ???).
He brings up Carbon, but erroneously, as Cocoa actually predates Carbon, and Carbon was only meant to serve as a compatibility bridge from OS 9 to OS X. Carbon was meant to die.
I don't plan to switch to Swift unless I absolutely have to. I spend 14 hours a day developing for OS X, but my problem isn't Objective-C, it's the ambiguous documentation and mildly temperamental behavior of certain frameworks coughAVFoundationcough. I didn't need Swift, wasn't looking for Swift. I have nothing against it, I'm sure it's awesome, but language isn't my issue at this point, so I don't really have much to gain by it.
Has Apple recently released anything that they dogfooded themselves first, and that was not buggy? Swift itself is a good example of how Apple seems to work now: let engineers build a toy, release it as v1.0, wait for the early adopters on Twitter to sing its praises, then maybe start using it internally. Maybe.
I wish Apple had instead designed better frameworks for UI and persistence and then built a language to make working with them easier.
Yes, Carbon has been replaced in the past but that involved a $400 million acquisition and 10 years of continual complaining, kicking and screaming from established Carbon users who had no desire to change. I doubt it's an example of how future changes will occur.
Replacing entire application frameworks is hard. Super, super hard. It seems like it might be simple to start by replacing Foundation with Swift's standard library but actually, replacing Foundation would mean replacing every Cocoa framework since they all rely on it. And there's a gigantic amount of work in those frameworks; 25 years of development (all the way back to the early NeXT days).
I think this is why Swift includes such extensive support for Objective-C interoperation: Apple expect Swift will need to link against Objective-C APIs for a long, long time.
I think we're much more likely to see a major deprecation sweep through Cocoa in one or two years' time (probably once Swift finally has all the features Apple have hinted are coming). Not deleting things, per se, but simply saying "these things look ugly or silly in Swift, so use these other things instead."
An example with 1:n relationships: CoreData returns and expects an (untyped) NSOrderedSet. Now I can either keep the NSOrderedSet but cast each object I want to use - and then my Swift code is just as bloated as the Obj-C would be:
// With the untyped NSOrderedSet, every element you pull out needs a cast:
let obj = mySet[0] as! MyClass
// What you'd like to write if the relationship were typed:
// let obj = mySet[0]
Or I create a typed Array from the NSOrderedSet, which is fine to work with in Swift - except that it is not managed by CoreData anymore. So I'd have to be careful to synchronize with CoreData manually, and it's not just saving that I have to watch out for, but also other operations like rollback etc.
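A minimal sketch of that second option in current Swift syntax - MyClass and mySet are stand-ins here, since the actual CoreData plumbing is omitted:

import Foundation

class MyClass: NSObject {}

// Stand-in for what CoreData hands back for a 1:n relationship.
let mySet = NSOrderedSet(array: [MyClass(), MyClass()])

// Copy the relationship into a typed Array - pleasant to work with in Swift...
let items = mySet.array as! [MyClass]

// ...but `items` is only a snapshot: CoreData no longer manages it, so
// saves, rollbacks, etc. have to be synchronized with the context by hand.
print(items.count)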
Another example is NSNumber. NSNumber needs to be manually mapped to Int, while the other direction works automatically. That makes sense when it is unknown whether NSNumber is an Integer or a Floating Point Number, but in CoreData I have specified that it's an Integer 32... (Well, I think it was similar with Obj C, actually).
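A Foundation-only sketch of that asymmetry (no CoreData here; current Swift syntax, and the attribute is hypothetical):

import Foundation

// Stand-in for the NSNumber that a CoreData "Integer 32" attribute hands back.
let stored = NSNumber(value: 42)

// NSNumber -> Int has to be mapped by hand, even though the model
// already promises this attribute is an integer:
let count = stored.intValue

// Going the other way is just a bridge/cast:
let updated = (count + 1) as NSNumber
print(updated)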
So, working with Swift in the Playground felt like a huge improvement over Obj C at first, but then working with some Cocoa APIs it started to feel ... more clunky again.
I think it was/is similar with Scala. If you can stay in the Scala libraries, awesome - it is a huge improvement over Java. But Scala won't automagically turn a terrible Java API into a great Scala API. Yes, you'll save a couple of semicolons and type declarations, but it's not that big of an improvement. Thankfully (in Scala), there are a lot of better frameworks, or wrappers for existing frameworks, by now. I guess Swift will have to go the same way...
It's impossible to know for sure, but look back 15 years to when Cocoa was the new and shiny compared to old-trusty Carbon: Apple was actually writing software in Cocoa internally. Everything they were building towards extended from the NeXT Objective-C world. True, they didn't publicly commit to Cocoa 100% until OS X 10.4, but you'd better believe that internally they were all in. It's just that the world didn't see it until the "We're rewriting the Finder in Cocoa" campaign was announced.
At Apple right now, no one outside of the compiler team is working on anything interesting in Swift. It's still locked away from them. To be fair, it's an evolving language and will cause a lot of heartache for everyone until the language has been baked in more.
I know Mattt is excited for Swift. Plus, a lot of developers are already doing some really cool stuff. So we shall see in a couple years how the story plays out.
Apple doubted JSON would be around?
"One of the truly clever design choices for Swift's String is the internal use of encoding-independent Unicode characters, with exposed "views" to specific encodings:
A collection of UTF-8 code units (accessed with the string's utf8 property)
A collection of UTF-16 code units (accessed with the string's utf16 property)
A collection of 21-bit Unicode scalar values, equivalent to the string's UTF-32 encoding form (accessed with the string's unicodeScalars property)"
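A quick illustration of those three views on one string (current Swift syntax):

let s = "café"
print(Array(s.utf8))                      // UTF-8 code units: [99, 97, 102, 195, 169]
print(Array(s.utf16))                     // UTF-16 code units: [99, 97, 102, 233]
print(s.unicodeScalars.map { $0.value })  // 21-bit scalar values: [99, 97, 102, 233]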
Here's what I mean:
var s = [Int: Void]()
s = ()
s = ()
s = nil
assert(s != nil)
assert(s == nil)
assert(s == nil)
Adding Swift to Safari would be an interesting development
A good example is string interpolation. Swift makes it unnecessary to have things like `stringWithFormat` at the framework level.
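For instance, a minimal sketch (the variable names are made up): what needs an NSString class method in Objective-C is a plain language feature in Swift.

let name = "Ada"
let count = 3
// Objective-C goes through the frameworks:
//   [NSString stringWithFormat:@"%@ has %ld items", name, (long)count]
// Swift builds it into the language:
let message = "\(name) has \(count) items"
print(message)   // Ada has 3 items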
The first generation computer graphics languages were vector-oriented to support either oscilloscopes or pen-plotters.
My first color frame buffer terminal was 512 x 512 x 8 bits, a $30K AED, in 1980. I think this costs less than a dollar on a low-end cellphone now.
I used to be mesmerized by it... everything looked so clean and modern, much better than the crappy 80x25 "workstation" displays of the VAXes and PDP-11s.
That effect actually looks amazing. I'd totally play a game with that aesthetic.
Is there a reason for it? What kind of circuit design in the DAC could cause low frequencies/significant distortion to require a high-pass at these frequency ranges?
Genuine curiosity here.
I can only try to imagine how it would look on an oscilloscope with more MHz and a better soundcard.
I've got an old hobbyist oscilloscope; it was my former boss and mentor's first scope. I really should hack up something with it along with some of these low-cost CPU boards.
I wonder if it could be emulated with a shader...
I can say that I am never more proud of our police forces than when I see them maintain their control and treat people with respect in the face of provocation. That ability to stay in command of oneself even when provoked is a core part of maturity and something to be lauded. As he said, it's a sign of a professional.
The role of the police is to prevent violence. That's what makes a community safe. It sounds like the letter-writer, on the other hand, is looking for an agent of his or her frustration. Someone to lash out at the protestors because he or she legally can't. A community where the police can attack people with impunity is about as far from safe as one can get. It seems somewhat obvious, but a safe community is one where no one is attacking anyone.
I've heard this sentiment before, but never so well written.
This was part of a complaint email sent to him:
>>I wanted to send you this email to express my frustration and outrage at how the situation of these protesters is being handled in Nashville. The first night protesters marched here after the incidents in Ferguson they never should have been allowed to shut down the interstate. Instead of at least threatening to arrest them, they were served coffee and hot chocolate.
This is how you deal with protests. Good job Nashville police; much respect. My own frustration lies with whoever sent this complaint.
"It is only when we go outside that comfort zone, and subject ourselves to the discomfort of considering thoughts we don't agree with, that we can make an informed judgment on any matter. We can still disagree and maintain our opinions, but we can now do so knowing that the issue has been given consideration from all four sides. Or, if we truly give fair consideration to all points of view, we may need to swallow our pride and amend our original thoughts."
Granted there are times when they are left with no choice, but if there is a strong relationship established between the protesters and the police, such that the protesters believe that the police understand their concerns, how much less likely does this become?
The Peelian Principles explicitly say that the police force does not represent the government, as that's the job of the military. First, the police force are citizens in uniform and part of the local community. The police are explicitly in place so the military can stay out of the community.
Militarised police forces around the world would do well to keep in mind that they are making themselves redundant, since if the police are indistinguishable from the military, they might as well be done away with and replaced by the military.
Huh. Really? Warnings five out of six times? Is that pretty common?
Every project (library, application, you name it) needs to be designed by one or more people thinking deeply about the problem space. You can't expect software design to just happen; once the code has been written, it's already too late - see you in v2. You can't expect 100 people to do it: there will either be a lot of conflicting visions, or no vision at all. And you can't expect to do it in your 1 hour of "free coding time" a day, because that doesn't give you enough time for deep thinking about the problem.
If you try to bypass design and solve it from "another angle", you get libtool, a solution that is a hundred times more complex than the problem it solves.
Look at successful open source projects (Rails, Go, even Linux). They were all initially designed by someone, then handed down to the community to improve. They still have strong minded architects leading the effort. Now compare it to those random "Let's clone X!" threads that never produce anything.
So, there's cathedral thinking even in the bazaar. And it's the only thing preventing it from eating us alive.
Yeah, there are a lot of dependencies if you want to compile an entire linux distro from source. But it's just exposed because everything is right there. If you actually tried to figure out how to compile all the software on your Windows machine, would it be any better?
And to say that libtool, bloated and complex as it is, came about because someone was incompetent seems quite a bit insulting. It seems that confronted with a compatibility problem between different Unix-like systems, one has two choices:
(1) Coordinate changes to all Unix-like systems so the compatibility problem is removed, or
(2) Work around the problem with some sort of wrapper.
Now, in a perfect world, (1) is obviously the better alternative (but even that wouldn't help existing installations.) But the world is not perfect, and my chances of accomplishing (1) are practically zero, even if it's "just a single flag to the ld(1) command". Hence, anyone who actually wants to get something working on all those systems would have to do whatever it takes.
That said, there isn't anything that prevents people from having standards. Both FreeBSD and MacOS are pretty coherent UNIX type OSes. But it is important to realize that a lot of really innovative and cool stuff comes out of the amazing bubbling pot that is Linux as well. I sometimes wish it were possible to do a better mashup.
As an old-timer with ~30 years of programming experience, I have similar sentiments as the author about complex projects today, yet I also often feel that too much knowledge, accumulated in sometimes cumbersome form, is being thrown away and reinvented badly. There has to be a compromise somewhere and it's no surprise that projects in an old language like C, running on evolved systems like Unix, de facto standardized on Autoconf to make it a little easier for developers. Do I want to use it myself? Certainly not, I have the luxury of being able to choose a modern language that abstracts most (not all!) platform-specific issues away at compiler installation time, at the cost of having much fewer deployment options for my code.
Technology is more complicated than even a few years ago. It can do more. It is accessible to more people (and all of their unique needs and abilities). Computers have the ability to make an almost infinite number of interconnections with other computers.
The point is that a single person can't possibly keep track of a sufficient quantity of information to direct a sufficiently complex system anymore. And with the communication and development tools available today we are able to build these complex layered solutions without always having to worry about all of the other details that we can't possibly worry about.
Even if you look at the end result (the m4/configure/shell/Fortran example), which is indeed twisted, to honestly say it is abnormal to reach such a state is to disregard any experience of developing software. Any project, even one brought to life in the cathedral style, will accumulate cruft in the long run that can disappear only with effort.
I'm a believer that a much simpler/cleaner set of software tools could be created. But their wide-scale adoption would be more difficult.
> I updated my laptop. I have been running the development version of FreeBSD for 18 years straight now, and compiling even my Spartan work environment from source code takes a full day, because it involves trying to make sense and architecture out of Raymond's anarchistic software bazaar.
of course, this didn't appeal to you back then, did it? ;)
MULTICS on the other hand...
The first is that it's too hard to converge things that have diverged. I pointed out an example in a Python library recently - the code for parsing ISO standard date/time stamps exists in at least 11 different versions, most of them with known, but different, bugs. I've had an issue open for two years to get a single usable version into the standard Python library.
Some of this is a tooling problem. Few source control systems allow sharing a file between different projects. (Microsoft SourceSafe is a rare exception.) So code reuse implies a fork. As the author points out, this sort of thing has resulted in a huge number of slightly different copies of standard functions.
Github is helping a little; enough projects now use Github that it's the repository of choice for open source, and Git supports pull requests from outsiders. On some projects, some of the time, they eventually get merged into the master. So at least there's some machinery for convergence. But a library has to be a project of its own for this to work. That's worth working on. A program which scanned Github for common code and proposed code merges would be useful.
Build tools remain a problem. "./configure" is rather dated, of course. The new approach is for each language to have its own packaging/build system. These tend to be kind of mysterious, with opaque caching and dependency systems that almost work. It still seems to be necessary to rebuild everything occasionally, because the dependency system isn't airtight. (It could be, if it used hashes and information about what the compiler/linker/etc actually looked at to track dependencies.) Usually, though, user-created makefiles or manifest files are required. We've thus progressed, in 30 years, only from "make clean; make" to "cargo update; cargo build".
The interest in shared libraries is perhaps misplaced. A shared library saves memory only when 1) there are several different programs on the same machine using the same library, and 2) a significant fraction of the library code is in use. For libraries above the level of "libc", this is unlikely. Two copies of the same program on UNIX/Linux share their code space even for static libraries. Invoking a shared library not only pulls the whole thing in, it may run the initialization code for everything in the library. This is a great way to make your program start slowly. Ask yourself "is there really a win in making this a shared library?"
Shared libraries which are really big shared objects with state are, in the Linux/UNIX world, mostly a workaround for inadequate support for message passing and middleware. Linux/UNIX still sucks at programs calling programs with performance comparable to subroutine calls. (It can be done; see QNX. When done on Linux, there tend to be too many layers involved, with the overhead of inter-machine communication for a local inter-process call.)
Even when it comes to closed source commercial software development I really miss the days of code ownership. When I first started working as a programmer back in the 90's it was common for different members of the team to "own" sections of code in a larger system (obviously I don't mean "own" in the copyright sense, just in the sense of having one clear person who knows that bit of code inside and out, and who probably wrote most of it). Of course we'd (preferably) still be beholden to code review and such and couldn't change things willy-nilly so as not to break the code of consumers of our code, but it was clear to all who to talk to if you needed some new functionality in that module.
The last few places I've worked have been the exact opposite of this, where everything is some form of "agile", nobody "owns" anything, and stories are assigned primarily based on scheduling availability as opposed to knowledge of a certain system. There is admittedly some management benefit to this - it's easier to treat developers as cogs that can be moved around, etc. - but my anecdotal belief is that this sort of setup results in far worse overall code quality, for a number of reasons: lots of developer cache-misses when the developer is just bouncing around a very large code base making changes to various systems day to day; lots of breadth of understanding of the system among all the developers, but very little depth of understanding of any individual component (which makes gnarly bugs really hard to find when they inevitably occur); and what should be strongly defined APIs between systems getting really leaky (if nobody "owns" any bit of the code, it is easier to hack such leaks across the code than to define them well, and when non-technical managers who interpret "agile" in their own worldview force developers to try to maintain or increase some specific "velocity", shit like this happens often).
Granted, there are some cases in which such defined ownership falls apart (the person who owns some system is a prima donna asshole and then everyone has to work around them in painful ways), but there were/are solutions to such cases, like: don't hire assholes, and if you do, fire them.
I barely understand the voodoo magic behind libtool myself, but as PHK says, it "tries to hide the fact that there is no standardized way to build a shared library in Unix". I'd wager dynamic linking inherently poses such quandaries, which are more easily solved through kludges.
Hey, it's still probably better than WinSxS.
I've never seen developers so lazy or just uneducated about their own language that they blatantly pull in libraries for such trivial operations. On the server even, no excuse about compatibility!
My first thought was that it seems increasingly clear that Stallman has been right all along.
The amount of complexity - and the opportunities to hide things in that - has increased so much compared to earlier PCs that in some ways I think the development of computer systems is headed on a rather treacherous path. When systems are so complex that no single person can understand them entirely, it's easier to make them behave against their owner's will.
None of the time differences in all the test cases are significant at all. Concerning yourself with this is premature optimization of the highest level especially in a language like Python. One should definitely be more concerned about writing clearer and more idiomatic code.
There is a secret (or you will find it later):
- You have to pay $10;
- or write a "great motivation";
to sign up.
Edit: hey hastagstartup.co downvote brigade, hashstartup.co is still available. So is hashstartups.com which is even better.
That question felt unanswered to me after 60 seconds on the landing page.
Shameless plug: If you're interested in the actual hashtag on Twitter, you might like to use our free embeds:
Also let me know if you'd like to get detailed analysis of the #startup hashtag, I'd gladly contribute it for free.
The signup process is kinda neat, but some of the questions made me go "errr..."
edit: I see they (you?) use Typeform for the signup form; never heard of them before, but pretty neat!
If you want a good set of papers that starts with perceptrons and Hebbian learning and goes to multi-layered neural nets and the emergence of what we now refer to as deep networks, check out http://deeplearning.cs.cmu.edu/
The frustrating part about this is that it is very hard to explain to coworkers why you feel this, because you intuitively expect them to feel the same way. I can recall some cases in which this has caused some friction when working on group projects.
His analysis that there are unreported vulnerabilities in TLS implementations sounds definitive enough to make me think he knows some of these vulnerabilities.
I've always found it curious how nonchalant people are with dangerous substances before it's realized how hazardous they can be. My grandfather loaded asbestos into railcars at a factory for 30 years, I still can't quite imagine how he must have felt when he fully realized the dangers it posed.
Interestingly, the article I linked missed the woman FTA, possibly because she quit early.
Radium was everywhere, including toothpastes and shampoos (as a cure for hair loss).
Fortunately only a few people could afford Radium-enhanced products, since they were expensive. Side effects started to appear quite soon too.
There were other popular uses of radiation: until the '50s many shoe shops had X-ray equipment that let you see how shoes fit [http://en.wikipedia.org/wiki/Shoe-fitting_fluoroscope]. Not that great an idea, after all.
For me it is a kind of warning against using everything that we invent or can do. There are always things we don't fully know or understand. In the long term something could be a great danger for all of us.
Before we jump into the next great thing, it is good to stop for a while...
The whole Sprawl trilogy is fantastic, and while I agree with other commenters here that Gibson's subsequent novels have become somewhat less awesome, it's hard to complain too much about that if you believe, as I do, that the author in question's first attempt resulted in the best novel of all time.
Still, Neuromancer is indisputably dated, as any such work would inevitably be, so I am glad to have originally read it in the 1980s.
It did, however, influence so much good stuff, like Ghost in the Shell (which is basically the same plot), Deus Ex, and others.
I enjoyed the audio-book read by Gibson himself; it was excellent.
Makes me wonder if people who "get shit done" operate on that sort of do-or-die mental state, or how long it's possible to put yourself in that mental state without either burning out or breaking down. I've read similar anecdotes from people like John Carmack and Richard Feynman (again, pressurized during WWII).
It's almost like we're operating at 50% efficiency, maybe 75% when we're really focusing, but only when we're in that self-preservation state do we go to 90%+.
The books have gotten thicker, artier, more self-indulgent, and weaker.
I'm sure he'd like to recapture the magic he had at 34, but maybe it requires the fear he spoke of. And an absolute ignorance about computers and networks.
I think it shares more with The Maltese Falcon than with any SciFi.
Saw WG complaining about GamerGate recently and thought how much he's aged, and how ungracefully, since GG and Operation Disrespectful Nod reminded me of the Panther Moderns.
Is it going to be OK? I asked, my anxiety phrasing the question. He paused on the stair, gave me a brief, memorably odd look, then smiled. Yes, he said, I definitely think it will.
But seriously, I can't fathom myself doing half the stuff being listed in the post in a year.
As someone working in ASIC design and verification, that code is not a good read IMO. I am stunned it even synthesized. The use of "initial" and "task" is not generally for describing hardware.
I hope he reads both these books consecutively. These two books are worlds apart but both equally excellent. It would be an interesting contrast.
" Number of books written: 0 "
They always include a model of their "spacecraft" in every shot. In this video, https://www.youtube.com/watch?v=8TKTsAa4sSs, they already have a pilot, random equations on the blackboard, and a spread of meters in front of the person speaking.
Their lead scientist is a physics student, who is the president of the UNO Paranormal Society.
It all seems rather fishy, but I'll withhold my judgement until someone comes and reproduces or falsifies their claims.
I truly believe if someone creates a warp drive it will indeed be someone like this guy operating out of a garage and not NASA.
The Wright brothers did not break any barriers in basic science, they solved an engineering problem. The basics of fluid dynamics and prerequisites to flight were established in the late 19th/beginning of the 20th century. There were plenty of details to sort out and experiments to do, but the scientific foundation was there. The problem of flight had been reduced to the engineering problem of improving two ratios: Thrust/Weight and Lift/Drag.
What this man in his garage is proposing he's done would be earth-shattering if the demo with which he's impressed a rather impressionable reporter (and several HN commenters) were real. It would be more impressive than anything the $10 billion LHC could hope for. But he hasn't done it, let's be clear.
What's actually happened is a guy tinkering in his garage with electromagnets has gotten himself and a few people surrounding him caught up with grandiose ideas. Electromagnets aren't blocked by Faraday cages, which easily explains away any warping he thinks he's accomplished.
You shouldn't need technical knowledge to figure this out though, because the top half of that article isn't about what he's doing, it's about the underdog ignored maverick thinker making revolutions in his garage. It's in an online newspaper hosted at omaha.com. It's written by someone with obviously little scientific training.
Personal bollocks filters are important to develop.
There are so many vulnerabilities in the baseband that it's not even funny. Even the QCOM secure boot process is full of holes. If a government agency wanted to drop a persistent baseband 'rootkit' on your device with full access to userspace, they could (unless you're using one of the few phones with separate userspace and baseband processors).
The DIAG commands are particularly fun. You can read and write memory on most phones. Some have locked it down to certain areas, but this varies wildly depending on manufacturer.
According to a note in this presentation, Ralf-Philipp Weinmann has noted exploits on baseband processors from both.
It'd be interesting if reverse engineering of the baseband could find those capabilities and see what's really possible and how it works.
It's both fascinating and frightening.
After reading a post on HN (https://news.ycombinator.com/item?id=5614689) entitled "why you should write every day", I've been doing it daily in a private blog. I do it in English to improve my second language. My main language is Portuguese.
I've been doing it since 09/22/2014. I try to write about my own ideas, because I believe it is the right thing to do and the best subject for improving myself. It's not an easy task, and I do not feel I'm improving yet, but something in me tells me that I should keep doing it.
Writing about a topic is a good test of whether you can explain it simply.
(yeah, I've written one book with a readership of thousands, 5/8ths of a PhD with a readership of 5, and a bunch of academic articles with a variable readership)
In my opinion, programming has always been a form of writing. Just like songwriting is a form of writing. It's simply a different medium, and therefore you get a different result.
I might be looking at it from a different kind of lens though.
> A core skill in both disciplines is an ability to think clearly. The best software engineers are great writers because their prose is as logical and elegant as their code.
I got the important takeaway from that experience, but many people do not, and it is a shame.
 - https://blog.joeblau.com/
 - https://github.com/joeblau/gitignore.io
I agree that writing can be helpful for many things, such as expressing emotion, or telling stories, or just a journal. In those scenarios, it's not dangerous to get it wrong. No one will lose their way in a technical project because you can't write cleanly about your dog.
Write anything except technical articles, until someone comments with something like "this was really well written!" Then you can consider adding to the painful cloud of tech articles.
If you release a tech blog post without editing (and largely re-writing) it a minimum of three times, stop doing it. Seriously. You're not helping.
Also, if you're a newcomer to a tech field and get discouraged by trying to learn about something from online resources, 90% of the time it's not you. It's the author being unable to clearly present ideas. Don't get discouraged!
As a hobby I've done a lot of reading around this; I've written three feature-length screenplays, and a novel you can find on Amazon, using very structure-centric approaches (as a result, my characters tend to be too flat).
Take a look at The Snowflake Method, unsurprisingly designed by a novelist who is also a theoretical physicist. Even with The Hero's Journey, there's a surprising amount of well-understood structure behind every story.
 http://www.amazon.com/dp/B00QPBYGFI http://www.advancedfictionwriting.com/articles/snowflake-met...
I think he might mean that writing for human consumption can be harder than most people think.
Pinker uses software terms to describe good writing: convert a _web_ of ideas into a _tree_ of syntax into a _string_ of words.
> Scientists are finding that there may be a deeper connection between programming languages and other languages than previously thought. Brain-imaging techniques, such as fMRI, allow scientists to compare and contrast different cognitive tasks by analyzing differences in brain locations that are activated by the tasks. For people that are fluent in a second language, studies have shown distinct developmental differences in language processing regions of the brain. A new study provides new evidence that programmers are using language regions of the brain when understanding code and found little activation in other regions of the brain devoted to mathematical thinking.
It's possible to stay as an average engineer for a long time, but if you want to try being an Architect, then at least 50% of your time is spent writing or public speaking. If you want to be an engineering manager, that's over 90%.
Fortunately a company I used to work for believed pretty strongly in cultivating these "soft skills", so they incentivized things like Tech Talks, and covered the cost of courses like Dale Carnegie.
But don't feel you "should". Essays are a bit like code - but if you want to get better at coding, you'll do better practicing coding than practicing essays. Likewise if your goal is "impact"; blog posts, particularly general ones like this, are ten-a-penny - even really good ones. Whereas really good software libraries are rare, even now - and you're more likely to write a specialist software library, with a small audience but one for whom that library is vital, than an equally specialist blog post. And while writing about something may clarify your thoughts, it's nothing next to setting that thing down in code.
Once again, do what you enjoy. If you like to paint, paint; if you like to make music, make music. But if you'd rather just code, or even just watch TV (the very epitome of unproductive wastefulness - but the typical blog probably achieves very little more), that's fine too. Don't let anyone tell you you shouldn't.
I've actually found that it helps me think about more of the big picture stuff. In writing my first post about one of our APIs, I actually realized that there was a small omission in how we designed it.
Before submitting a story, spend some time introducing yourself to the community - post diaries, as well as reply to the diaries and comments of other kurons.
I think being able to write is extremely important, but I think the rhetoric behind writing is just as important, if not more important. When you write in a community or forum, like HN, citing your sources and defending your arguments is more important than on a blog, because if you don't, your voice simply won't be heard as loudly.
Contrast this with clickbait blogs, or blogs that simply write for shock, and it becomes clear that having a humorous or convincing writing style is almost as important as being able to argue your point, or convey a complicated idea. However, in my mind I find the latter a far more important skill in the long run.
So yes, software engineers should write, but also don't forget to do some 'code' reviews.
Which in a way helps prove one of the article's points: writing and programming are alike in their need for precision and clarity. :)
I also believe that engineers should write code every day as well... many engineers take at least a day off during the week and it somehow resets the mind a little.
Also, related link on writing: https://news.ycombinator.com/item?id=8793024
The problem with writing is that you usually do it by serializing your thoughts in one go. Programming on the other hand is an activity where you almost randomly jump from one point to the other.
> Code and essays have a lot more in common.
What I hate about writing prose is that you are expected to use synonyms all over the place. If you use the same word in two subsequent sentences, this is considered "bad". With programming, I have no such problem.
Should you take it to the extreme Knuth did with Literate Programming? I personally don't. But, once I've successfully explained to the computer what it should do (my program works) I look for ways to better communicate what it's doing (my program is readable). In many cases that's harder than solving the technical problem at hand.
Concision and simplicity seem to be the key to that. I agree with the author that "like good prose, good code is concise," although for prose that's more a matter of taste. Otherwise we'd all be reading a lot more Hemingway.
Obligatory link: ecc-comp.blogspot.ca
I've found a lot of overlap, in thought process, between programming and writing. Also with music and math. Leverage everything that helps, I think.
"I don't know what I think until I try to write it down."
The irony is that the precision of CS makes us better writers because we can see the inconsistencies. (How many requirements documents can be interpreted multiple ways?) Between undergrad and grad the Math/Verbal spread on my standardized tests flipped.
I wasn't good in math, languages or anything in elementary school.
I didn't want to be there and always played "sick".
This just got a little bit better, when I left elementary school and switched 2 schools afterwards. Since the second school was a lot easier than the first, I got better grades without doing anything.
But I never got really good at anything at school, better in Science than in Humanities, always a B- on average. Even my degrees got that rating...
Now, I just need to learn a programming language and start coding so I can improve my human/computer communication skills :)
Maybe software engineers should write less.
As writers say, know your audience.
There is little so obscure as undocumented code.
An old software joke goes, "When code is written, only the programmer and God understand it. Six months later, only God."
As a result, for continued understanding of code, documentation, to explain the code to a human reader, is crucial. In simple terms, to humans, code without documentation is at best a puzzle problem in translation and otherwise next to meaningless. Use of mnemonic identifier names to make the code readable has created a pidgin-like language that is usually unclear and inadequate.
Thus, writing documentation is crucial, for the next time the code needs to be read and understood, for users, etc.
Thus, net, after too many years with code and software, I claim (big letters in sky writing, please):
The most important problem, and a severe bottleneck, in computing is the need for more and better technical writing.
My suggestion for some of the best models of such technical writing are a classic text in freshman physics, a classic text in freshman calculus, and, at times, a classic text in college abstract algebra (for examples of especially high precision in technical writing). Otherwise I suggest Knuth's The Art of Computer Programming.
First rule of technical writing: A word used with a meaning not clear in an ordinary dictionary is a term, in technical writing, say, a technical term. Then, before a term is used in the writing, it needs a definition, that is, needs to have been motivated, defined precisely (maybe even mathematically), explained, and illustrated with examples. Then whenever in doubt, when using the term, include a link back to the definition. So the first rule of technical writing is never but never use a term without easy access to the definition. Similarly for acronyms.
Biggest bottleneck in computing .... Sorry 'bout that. YMMV. </rant>
1) Their writings will be preserved with the same power that libraries afford traditional scientific publishing.
2) They won't just be blogging, they will be publishing.
3) They can assign a digital object identifier (DOI) at their discretion making their work "count" in the scholarly literature.
4) Their blog will be automatically formatted as a PDF.
The correct way to get huge percentages is to phrase it as "it used to be X% more." With "more" the percentage can grow without bound, but with "less" you can never get past 100%: going from 5 to 1 is only 80% less, while going from 1 to 5 is 400% more.
So we developers keep adding the comfort noises, as well.
At least Mint didn't use the AOL's "File's done" announcement :)
It gives you feedback (which increases safety) but it raises trust issues.
The Wikipedia page has a screenshot for those who haven't seen it: https://en.wikipedia.org/wiki/Magic_Cap
All children (and most adults) crave attention, and the giving and taking of it is perhaps the most powerful tool that a parent and/or teacher has in the arsenal.
I've always thought a lot of parents do it totally wrong...they give the misbehaving kid lots of (albeit negative) attention while the quiet, well-behaved one they ignore.
My parenting style was the exact opposite...I always stroked good behaviours and tried my best to ignore bad ones.
The comment "What an idiot I was, I thought. That was just an axiom, it is called commutativity. One doesn't prove axioms." is interesting. What's chosen as an axiom, and why, is an advanced question. Unless you get into foundations of mathematics, that question is seldom addressed. It's way beyond most pre-college math teachers. It's the sort of question that occurs to smart kids, but there's no easy answer you can give them. The usual answers are theological, and boil down to "shut up, kid". Here's a discussion on Stack Exchange of that subject: http://math.stackexchange.com/questions/127158/in-what-sense...
(If you get into automatic theorem proving, you have to address such issues head-on. Adding an inconsistent axiom can create a contradiction and break the system. This leads to constructive mathematics, Russell and Whitehead, Boyer and Moore, and an incredible amount of grinding just to get the basics of arithmetic and number theory locked down solid. In constructive mathematics, commutativity of integer addition is a provable theorem, not an axiom.
I once spent time developing and machine-proving a constructive theory of arrays, without the "axioms" of set theory. The "axioms" of arrays are in fact provable as theorems using constructive methods. It took a lot of automated case analysis, but I was able to come up with a set of theorems which the Boyer-Moore prover could prove in sequence to get to the usual rules for arrays. Some mathematicians who looked at that result didn't like seeing so much grinding needed to prove things that seemed fundamental. This was in the 1980s; today's mathematicians would not be bothered by a need for mechanized case analysis.)
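For a flavor of what "commutativity as a theorem, not an axiom" looks like when mechanized, here is a minimal sketch in Lean 4 (not the Boyer-Moore prover used above), proving commutativity of addition on the naturals directly from the inductive definition; the lemma names come from Lean's core library:

-- Commutativity of addition on Nat, proved by induction rather than assumed.
theorem addComm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero      => rw [Nat.add_zero, Nat.zero_add]      -- base case: m + 0 = 0 + m
  | succ k ih => rw [Nat.add_succ, Nat.succ_add, ih]  -- step: push succ out, apply the hypothesis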
This is probably the most insightful part of the AMS narrative:
> OK, if it is so hard to teach kids the notion of a number, what am I trying to do? What is the point of my lessons? I said it many times and I am going to say it again: the meaning of the lessons is the lessons themselves. Because they are fun. Because it's fun to ask questions and look for the answers. It's a way of life.
If math pedagogy is your main interest any of the MSRI Math Circle Library books are worthwhile. This includes "Circle in a Box" which is a Math Circle starter kit freely available here http://www.mathcircles.org/GettingStartedForNewOrganizers_Wh...
You could have a whole set of sessions with children exploring different arrangements of coins and noting that no matter how many you add within the hull, you don't get any 'more coin' (altering the plurality may help adults understand this problem.) If you have some button[s] and much more coin[s], can you add just one coin so that you have more coin than button? How far away do you need to add it?
If reports are true that the self-driving car will start testing next month, I predict 2015 may be the year that autonomous vehicles go big. The true goals of Lyft and Uber will finally be accomplished. I hope there won't be much red tape getting these cars into action (although there probably will be).
1) fast (instant) performance
2) end-to-end encryption (I wouldn't mind if they used the same Axolotl protocol as TextSecure and Whatsapp)
If it doesn't have any of those, I won't be using it.
Very interesting to learn there was one man who helped give birth to both fields.
We are using operations research optimization techniques at my side project, StaffJoy. The greatest innovation in usability of OR has been the JuMP project - there is now a fairly universal way to express optimization problems that is lower-level than Excel and higher-level than C.
His book from 1948 is probably first book about applied programming: https://archive.org/details/ComputingMechanismsLinkages
Then wait for V8 performance: http://jsperf.com/performance-frozen-object - ES5 is doing well...
And at last: Wait for < IE-11 to die.
It remains to be seen if io.js will boost ES Next on the server. Since it's bound to V8, I don't expect much.
Other implementations, like the ahead-of-time compiler echojs, are becoming interesting. I am also curious how TypeScript will look at v2.0.
I am ready, however I still don't use arrow functions... Which were first heard of in 2010? 2011?
It still feels like so far away.