For LIVE project page go to KICKSTARTER LINK

Crawlspace graphic

This is the Kickstarter project LAUNCHED on Monday 8 July 2013 by Larry Dickson of LAZM (Lost Art of Zero Maintenance). It's contrarian computing!

What we need to be able to do

Crawl-Space Computing is intended to be a practical guide to Wide Computing. This means that, using this guide, it must be possible to carry out a practical project --- scientific, robotic, or embedded --- of any desired degree of complexity. We will therefore show all the steps of a real run in full detail.

Many people are surprised that the communication superstructure comes first, and most neglect the shutdown step. But we'll fully justify these steps --- right down to individual bits being tweaked --- and show how they always reach the goal. Once we know that, we can deal with code and hardware of any complexity. The same principles apply both to outer parts (including cabling) and to inner parts that act independently, even if they are temporary.

Progress report! See Connel Outline 8/6/2013.


Q: Can we please have a clear view of that graph you showed in the video? The one that says things are 50 times worse now than in the 1960s?

A: Here it is:

Crawlspace graphic

My big claim is that the rate of progress has not increased. That is actually being generous. It has declined. In the 1960s we were progressing scientifically by leaps and bounds. (I was there!) Now progress is in small implementation details like merging phone and computer to get "smart phone."

The rest of the graph (I prepared it for DARPA in 2007) is straightforward: just track the increase in the number of professionals and, once again conservatively, add only 25% of the increase in the power of the tools those professionals use. That shows how much effort is being thrown in.

Why? Because Deep Computing requires coping with everyone else's namespace, methods, and drivers. That burden increases exponentially with time. That's why the stagecoach is bogged down!

Crawlspace graphic

Q (Ian Hirschsohn): You mention RS232 COM as an interconnect. Why not settle on an i/connect method that is CURRENT? COM and LPT belong in a museum. Even for robotics. I have been dragged kicking and screaming into the USB spec.

A: I am not championing RS232 over USB, or any form of communication over any other. Instead, I am championing (and providing tools for) the placing of communication and timing design on the OUTER layer of programming, instead of hiding it in drivers or namespace-centric "methods", as almost all languages currently superseding C and Fortran insist on doing.

Once you do that, connecting things (even in an unplanned fashion) becomes comparatively easy, and you can also develop the pieces independently (by supplying dummies to generate and take up data and timing flows). Without it, even something as massively developed as Mac iMovie becomes a nightmare to deal with (as my daughter Alice just found out) because there is no way to control the huge, obvious data flows - just dozens of mysterious buttons that never quite do what you want.

The heart of the teaching will be a design pseudocoding method that yields practical designs with effort that grows only linearly with the complexity of the problem. Connel is then a convenient tool for making sure that component pieces truly behave according to spec.

I never actually mentioned "RS232" but I did mention "serial". In real-world applications which I have had to cope with, the need for serial IO arises and we use whatever hardware offers "serial" as a subset of itself. This has included USB, Bluetooth, and real 2-wire or 4-wire RS232 serial. I2C is also common but seems to be used within a board rather than between devices. Connel has successfully handled them all, and lets you use the same code. What you do is place a layer of software around each peculiar piece of hardware, and outside that layer of software they all look the same. It's like a driver, except it is DESIGNED RIGHT - not a silly imitation of a math call (as "read(x, y)" imitates "atan2(x, y)") but a responsive data gatekeeper.

Q (Ian Hirschsohn): ON-CHIP multi-core parallel procs are widely available on eBay for a handful of $. What about a RELATED problem that is CRYING for a solution: a low-entropy, simple communication scheme between ON-CHIP parallel cores? Win/OS X/Linux via C++ multitasking is extremely high entropy, i.e., the overhead of setting up and managing the multiple tasks is so high that in many cases it is more efficient to just use the main core and skip parallel procs.

A: I already have patents and patents pending to deal with these problems. Development is part of the "language" and "real-time, space-travel-quality hardware" mentioned in the Angel and Partner Specials. Though that work is beyond the scope of this project as such, I hope this project will lead into it.

However, the book will include techniques for dealing with the multicore problem in simple cases. Note that "simple cases" include many of the common "processing for half an hour" waits in common uses of big data. Examples: movie and video processing, multiple overlaid signal postprocessing, high-frequency signal and noise emulation, and seismographic postprocessing. Almost all of these are extremely parallelizable, as soon as you get control of the data stream - the specialty of Wide Computing.

Q: You spoke of a "duality" between Wide and Deep Programming in the video. Can you be more precise about some details of this duality?

A: My paper, "Occam Road Map for the DOS PC," published in the 1996 PDPTA proceedings, pages 1010-1019, contains the following table:

Contrast between paradigms

                         Deep Emphasis                 Wide Emphasis

Resource allocation      Infinite pool,                Explicitly limited
                         explicitly programmed
Configuration decisions  Run time                      Load time
Initial state            State of all enclosing        Explicit load-time
                         callers                       setup parameters
Component life           During call only              Eternal
Typical task ordering    Stack                         FIFO
Normal stack state       Deep                          Empty
Information passing      Calling sequence,             Process-to-process
                         global variables              (channel) transmission
Program control          Serial                        Parallel
Model                    Compiler parse                Connected hardware

I have adjusted the headings to conform to current terminology, but otherwise left the table the same. Note that, unlike in the video, "wide" is on the right-hand side.

Please notice the word "Emphasis." Neither paradigm completely excludes the other side of the duality. It is amazing, though, how close they can come. Transputer occam coding was capable of fully complex programs --- including an autonomously controlled automobile on the German Autobahn in 1993 --- with a maximum stack depth of six words!

An example from my professional work

The following diagram shows one of the services offered by LAZM. This applies to valuable scientific instruments whose supporting computers are so old (often 1980s vintage) that their IO media are unavailable and their parts cannot be replaced. Their raw data is slow, so modern CPUs can easily keep up - if Wide Computing techniques are used to prevent hiccups.



               - > -  data/timing   - > -
              /     \ transmission /     \
             /       \  detours   /       \
             |       |            |       |
      ------ | ----- | ---------- | ----- | ------
      |       \  |  /              \  |  /       |
      |        \ | /                \ | /        |
      |          |                    |          |
      |  Input   |       Legacy       |  Output  |
      |   and    |        Core        |  Signals |
      | Control  |        Code        |    and   |
      |          |                    |   Data   |
      --------------------------------------------


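One way to picture the detours in the diagram is the following C sketch. It assumes the legacy core's input and output calls can be intercepted --- shown here with function pointers; on a real job the interception point depends on the instrument. All names are illustrative, not from an actual LAZM engagement: the legacy core runs unchanged, while its dead I/O media are detoured to modern replacement layers.

```c
/* Sketch: detouring a legacy core's I/O without touching the core code. */
#include <assert.h>

/* ---- hooks the detour installs; the legacy core only sees these ---- */
static int  (*get_sample)(void);
static void (*put_result)(int);

/* ---- legacy core code, unchanged: slow, simple, trusted ---- */
static void legacy_core_run(int nsamples) {
    for (int i = 0; i < nsamples; i++) {
        int s = get_sample();        /* once read the dead instrument port */
        put_result(s * 2);           /* once wrote the dead output medium */
    }
}

/* ---- modern input-and-control and output layers ---- */
static int next = 0;
static int modern_input(void) { return next++; }    /* stand-in data source */

static int results[8], nresults = 0;
static void modern_output(int v) { results[nresults++] = v; }

int main(void) {
    /* install the detours, then let the legacy core run untouched */
    get_sample = modern_input;
    put_result = modern_output;
    legacy_core_run(8);

    assert(nresults == 8);
    assert(results[0] == 0 && results[7] == 14);
    return 0;
}
```

Because the legacy data rate is slow, the modern layers can easily keep up --- the Wide Computing discipline is what keeps their timing from ever hiccuping the core.
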
Lawrence J. Dickson, PhD (Mathematics)
Project Creator

Larry Dickson earned his PhD in Mathematics from Princeton University. He has specialized in robust solutions to difficult computing problems, taking proper account of data flow, timing, and multiprocessing ("Wide Computing").

Typically, Larry and his Wide Computing colleagues have been called in when a project is in danger due to the failure of standard methods. Larry's resume includes: high-resolution graphics networking with image processing (ink adjustment) of transmitting files (Superset); prototype automotive radar (SuperComputing Surfaces for Ford Motor Company); and holographic imaging (SCS). The Ford work included a Transputer-based multiprocessing decimator board, hardware designed and programmed by Larry.

More recently, Larry helped invent RAID data redundancy algorithms, coded them in a system-independent RAID core, and helped integrate this with Linux in a commercial product (IceNAS RAIDn) that drives up to 18 disks simultaneously (InoStor/Tandberg Data). He invented a low-power-usage massive data storage system and aided in maintenance of the Linux distribution Edgeware (Cutting Edge Networked Storage). He developed Connel in support of a more recent project, contributing mathematical and robotics code for another inventor, Dave Swanson of MeasureBot3D LLC. Finally, Larry has recently become involved in the mathematics and coding of the VERNE satellite in the KickSat project created by Zachary Manchester on Kickstarter.

All the above projects have been technical successes due to Wide Computing techniques. Larry is experienced in C, assembly (X86, MSP430, PIC), Fortran, Perl, and the parallel language Occam. In current work, he does business under the trademark "Lost Art of Zero Maintenance" (LAZM), an allusion to one of the side benefits of Wide Computing.

Larry is inventor in several patents and patents pending, mostly for projects mentioned above. Most recently as sole inventor he has received U.S. Patent 7,219,289 "Multiply redundant raid system and XOR-efficient method and apparatus for implementing the same" (owned by Tandberg Data) and U.S. Patents 7,512,718 and 7,822,882 "Reconfigurable computing array without chassis" (owned by LAZM). Larry's most recent invention (patent pending) is ITOCA (In-Time On-Chip Army), which will cross-fertilize with this Connel project.