I was in a meeting earlier this week, during which I blurted out something about how I don't really see much of a difference between prototype and production code, and I'm still trying to figure out what I meant.

My career has been situated pretty much in its entirety in weirdo R&D territory. The code I write has no warranty of merchantability or fitness for a particular purpose, as the saying goes. It has nevertheless found its way into production on numerous occasions. If there are any properties that demarcate prototype code from production code, I'd say they converge toward:

Language
Obviously, if the prototype is in shell and the production system is in C, the former will have to be rewritten into the latter. But if prototype and production are both written in, say, Python or Ruby, there's a good chance at least one chunk of the prototype will end up in production verbatim.
Performance, ex. language
Typically, when we write prototypes, we don't worry too much about loading giant files into memory or generating a lot of storage or network I/O. The same goes for sloppy algorithms with nested loops, or loops with black-boxed calls out to libraries that do any combination thereof. The exception is when the poor performance gets in the way of running the prototype over and over again, which is something you do a lot with prototypes, so I find myself cleaning up at least some of this mess as I go.
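The nested-loop variety of sloppiness, for instance, often costs one line to fix. A sketch in Python (the function and its names are hypothetical, not from any particular project):

```python
def common_items(a, b):
    """Intersection of two lists, preserving a's order.

    The prototype version is the accidental O(n*m) nested loop:
    [x for x in a if x in b]. Building a set first makes each
    membership test O(1) amortized, which matters once you start
    running the prototype over and over on bigger inputs.
    """
    seen = set(b)
    return [x for x in a if x in seen]
```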
Error handling
A similar condition holds as for performance: you run the prototype, over and over, over a bigger and more variegated input set. It's bound to crap out eventually, and you're gonna need to know why. All the places you suppressed errors, you're eventually going to have to go back and add some kind of handler for them. If you know that's the right thing to do at the outset, there's no sense in restraining yourself.
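Concretely, the handler usually amounts to replacing a silent `except: pass` with something that records the why. A minimal Python sketch, where the "key=value" record format is a hypothetical stand-in for whatever the prototype ingests:

```python
import logging

log = logging.getLogger("prototype")

def parse_record(line):
    """Parse a 'key=value' line into (key, float).

    Instead of suppressing the error, log why the record was
    skipped, so that when the big run craps out you know where.
    """
    try:
        key, value = line.split("=", 1)
        return key.strip(), float(value)
    except ValueError as exc:
        log.warning("skipping malformed record %r: %s", line, exc)
        return None
```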
Security
I spent most of my career as a developer heavily informed by the infosec community, so including security features in my code is almost a compulsion. In environments where the prototype and the production code are the same language, however, this compulsion becomes rational: if I don't do it now, there's no telling whether somebody else will do it later. It's not like it's hard, or takes a lot of time: almost all security programming reduces to making sure the data you just ingested corresponds to what it claims to be, or at least to what you assume it ought to be. In other words: input sanitation, which intersects heavily with error handling directly above.
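As a sketch of what I mean, assuming hypothetically that the ingested value is supposed to be a TCP port, the whole routine is a few lines:

```python
def sanitize_port(raw):
    """Confirm the ingested value is what it claims to be:
    an integer in the valid TCP port range."""
    try:
        port = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {raw!r}")
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```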
Documentation
Of course, production code is, at least in theory, supposed to be comprehensively documented. In a prototype this is arguably a waste of time. I may be betraying one of my idiosyncrasies here, but I disagree. I write prose documentation in prototypes all the time, though my motivation is distinctly different. It's much more akin to literate programming, a story about how I want the process to behave, rather than a spec or an operator's manual—although those feature too. Moreover, I include the naming of things like classes, methods, and method signatures under the umbrella of documentation, and for much the same reason—that is, the tendency for prototypes to get appropriated into production. I give those names some non-zero consideration up front, since it's so much harder to change the names of things once they have been minted.
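To make that concrete with a hypothetical example: the name and the docstring do the documenting, as a short story about how the process should behave rather than a spec.

```python
def dedupe_preserving_order(items):
    """Remove duplicates from items; the first occurrence wins.

    The name is deliberate: if this prototype gets appropriated
    into production, 'dedupe_preserving_order' is much harder to
    misread than 'cleanup' would have been, and much harder to
    rename once callers exist.
    """
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```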
Test suite
Perhaps the one place my code remains decidedly prototype-grade is the test suite, but that is largely because in the early stages of a piece of software it is rarely clear precisely what needs to be tested, or how. Nevertheless, automated tests are useful in prototypes for things you can't eyeball, and I find they accumulate organically, so it's best to assume up front that they will.
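An example of a thing you can't eyeball, with a hypothetical helper: floating-point invariants are exactly the sort of check that accumulates organically in a prototype.

```python
def normalize(values):
    """Scale values so they sum to 1 (hypothetical helper)."""
    total = sum(values)
    return [v / total for v in values]

def test_normalize_sums_to_one():
    # No human can eyeball a floating-point sum; the machine can.
    result = normalize([3, 1, 4, 1, 5])
    assert abs(sum(result) - 1.0) < 1e-9
```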
Packaging
One essential feature of production software is that a user—for some definition of the term—should be able to install it. Nowadays, with distributed teams whose members each run a self-contained development environment on their own laptop, even prototypes have users. Furthermore, most modern languages have extremely sophisticated boilerplate generators, such that not packaging your code actually means going out of your way.
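In Python, for instance, the boilerplate amounts to a single file. A minimal, hypothetical pyproject.toml like this is enough for `pip install .` to work:

```toml
# minimal, hypothetical packaging metadata
[project]
name = "my-prototype"
version = "0.0.1"
dependencies = []

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```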

One of the more memorable experiences of my software development career has to be a 20-minute argument I had with a boss over a 2-minute expenditure on some input sanitation in some module or other, which had been largely generated from boilerplate. The charge, of course, was that I had wasted time, which I suppose was true if I was also to be held responsible for the order-of-magnitude blowup of arguing over it, but in my judgment the input sanitation itself was a good investment.

The reason is that if you allow me my precious two minutes to exercise what I concede is ultimately a tic—implementing an input sanitation routine—then I don't have to second-guess the code further down the line. As the author, I am outsourcing a great measure of cognitive overhead into the artifact—my own and that of anybody who comes after me. If I don't have to occupy myself with trivia, like whether or not an algorithm is actually being fed the right data, then progress on the whole goes faster.

Alan Kay once said that if you choose the right data structure, most of the computation is already done for you. In a way, the prototype is itself a kind of computation: you are computing the optimal configuration for a formal process, along with its concomitant structure. The difference is that this computation happens inside your head. Why would you deliberately choose a suboptimal configuration when you are aware of an optimal one? Because of a one-time savings of two minutes?

By definition, a prototype has to encode at least the rudiments of the process to be carried out in production, or it doesn't demonstrate anything. Evolving production software directly out of a prototype, when both are written in the same language, is, I believe, a perfectly natural and sensible thing to do. So is including features of production code in the prototype when it would cost more to argue about them than just to do them, and doing them would only improve the output.

Anyway, I think that's what I was getting at.