Given that interesting apps will use multiple languages/frameworks (if not at the productivity layer, then at the efficiency layer), we should be working on portable in-memory and on-disk data formats for various types of ML models (and fast swizzling/unswizzling). Use Google's Protocol Buffers and define some standard schemata?
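As a sketch of what such a standard schema might look like (a hypothetical proto3 definition, not an existing standard; all message and field names are illustrative assumptions), a portable format for a simple linear model could be:

```proto
// Hypothetical schema sketch for a portable linear-model format.
// Names (ml.models, DenseMatrix, LinearModel) are illustrative, not standard.
syntax = "proto3";

package ml.models;

// Dense weight matrix stored flat in row-major order, so any language's
// generated bindings can reconstruct it without pointer swizzling.
message DenseMatrix {
  uint32 rows = 1;
  uint32 cols = 2;
  repeated float values = 3;  // length must equal rows * cols
}

// A simple linear model: y = Wx + b.
message LinearModel {
  DenseMatrix weights = 1;
  repeated float bias = 2;
  repeated string feature_names = 3;  // optional metadata for portability
}
```

Because protobuf generates bindings for many languages from one `.proto` file, a model trained at the productivity layer (say, in a scripting language) could be loaded directly at the efficiency layer (say, in C++) without a hand-written translation step.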
- Past Projects
- Computer Questions Asked by Non-Computer People
- History of Computing