Here's some Python code for reading serial input from Optoforce's 3D sensor and sending it over OSC to MaxMSP or SuperCollider.

The slightly odd baudrate of 1000000 is supported neither in SuperCollider nor in MaxMSP under macOS, so I had to use Python for this.

#for the 3d sensor OMD-30-SE-100N
#f.olofsson 2017

#first argument is serial port, second ip and third port.  e.g.
#python '/dev/tty.usbmodem1451' '' 9999

import sys
from struct import *
from threading import Thread
import serial
from OSC import OSCServer, OSCClient, OSCMessage, OSCClientError

osc= OSCClient()
if len(sys.argv)>3:
  osc.connect((sys.argv[2], int(sys.argv[3])))  #send to address and port
else:
  osc.connect(('127.0.0.1', 57120))  #default send to sc on same computer

serport= '/dev/cu.usbmodem1411'
if len(sys.argv)>1:
  serport= sys.argv[1]

ser= serial.Serial(
  port= serport,
  baudrate= 1000000,
  parity= serial.PARITY_NONE,
  stopbits= serial.STOPBITS_ONE,
  bytesize= serial.EIGHTBITS,
  timeout= 1
)
print('connected to serial port: '+ser.portstr)

def oscInput(addr, tags, stuff, source):
  print stuff  #for now do nothing

server= OSCServer(('', 9998))  #receive from everywhere
server.addMsgHandler('/optoforceConfig', oscInput)
server_thread= Thread(target= server.serve_forever)
server_thread.start()

print('sending osc to: '+str(osc.address()))
print('listening for osc on port: '+str(server.address()[1]))

###configure sensor (optional)
conf= bytearray(9)
speed= 10  #0, 1, 3, 10, 33, 100 (default 10)
filter= 3   #0 - 6 (default 4)
zero= 255   #0, 255
checksum= 170+0+50+3+speed+filter+zero
conf[0]= 170
conf[1]= 0
conf[2]= 50
conf[3]= 3
conf[4]= speed
conf[5]= filter
conf[6]= zero
conf[7]= checksum>>8
conf[8]= checksum&255
ser.write(conf)  #send the configuration packet to the sensor

def main():
  while True:
    b= ser.read(4)  #read the four header bytes
    header= unpack('BBBB', b)
    if header==(170, 7, 8, 10):  #data packet
      b= ser.read(12)
      counter= unpack('>H', b[0:2])[0]
      status= unpack('>H', b[2:4])[0]
      xyz= unpack('>hhh', b[4:10])
      checksum= unpack('>H', b[10:12])[0]
      sum= 170+7+8+10
      for i in range(10):
        sum= sum+ord(b[i])
      if checksum==sum:
        #print(counter, status, xyz)
        msg= OSCMessage('/optoforce')  #osc address (assumed name)
        msg.append(xyz)
        try:
          osc.send(msg)
        except OSCClientError:
          print 'osc: could not send to address'
      else:
        print 'data: checksum error'
        print checksum
    elif header==(170, 0, 80, 1):  #status packet
      b= ser.read(3)
      status= unpack('B', b[0])[0]
      checksum= unpack('>H', b[1:3])[0]
      if checksum!=(170+0+80+1+status):
        print 'status: checksum error'
        print checksum
    else:
      print 'header: serial read error'
      print header

if __name__ == '__main__':
  try:
    main()
  except KeyboardInterrupt:
    server.close()
    ser.close()
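For reference, the data packet layout the script assumes (header 170, 7, 8, 10, then big-endian counter, status, three signed 16-bit force components and a checksum over the header plus the first ten payload bytes) can be exercised in isolation. A small sketch with a fabricated packet; the function name and the test values are mine, not from the Optoforce documentation:

```python
from struct import pack, unpack

def parse_data_packet(b):
    # parse the 12 payload bytes that follow a (170, 7, 8, 10) data
    # header: big-endian counter, status, three signed 16-bit force
    # components, then a checksum over header + first 10 payload bytes
    counter = unpack('>H', b[0:2])[0]
    status = unpack('>H', b[2:4])[0]
    xyz = unpack('>hhh', b[4:10])
    checksum = unpack('>H', b[10:12])[0]
    expected = 170 + 7 + 8 + 10 + sum(bytearray(b[0:10]))
    return counter, status, xyz, checksum == expected

# fabricate a payload: counter 1, status 0, forces (100, -200, 300)
payload = pack('>HHhhh', 1, 0, 100, -200, 300)
chk = 170 + 7 + 8 + 10 + sum(bytearray(payload))
packet = payload + pack('>H', chk)
counter, status, xyz, ok = parse_data_packet(packet)
```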

Optoforce 3D sensor with MaxMSP from redFrik on Vimeo.

Optoforce 3D sensor with SuperCollider from redFrik on Vimeo.

work with mark: RedUniverse - a simple toolkit

I made a short demo/poster session at the LAM conference on 19 December 2006 in London.
Below is the handout describing the toolkit.

This toolkit is now distributed via SuperCollider's package system quarks. All open source.
How to install:


RedUniverse - a simple toolkit

Mark d'Inverno & Fredrik Olofsson

This is basically a set of tools for sonification and visualisation of dynamic systems. It lets us build and experiment with systems as they are running. With the help of these tools we can quickly try out ideas around simple audiovisual mappings, as well as code very complex agents with strange behaviours.

The toolkit consists of three basic things... Objects, Worlds and a Universe. Supporting these are additional classes for things like particle systems, genetic algorithms, plotting, audio analysis etc., but many of these functions you will preferably want to code yourself as a user.

We have chosen to work in the programming language SuperCollider, as it provides
tight integration between realtime sound synthesis and graphics. It also allows for minimal classes that are easy to customise and extend. SuperCollider is also open for communication with other programs, and it runs cross-platform.

So to take full advantage of our toolkit, good knowledge of this programming language is required. We do provide helpfiles and examples as templates for exploration, but the more interesting features, like the ability to live-code agents, are hard to fully utilise without knowing this language.

Detailed overview

In SuperCollider we have the three base classes: RedObject, RedWorld and RedUniverse.

RedObject - things like particles, boids, agents, rocks, food etc.
RedWorld - provides an environment for objects.
RedUniverse - a global collection of all available worlds.

Objects all live in a world of some sort. There they obey a simplified set of physical laws. They have a location, velocity, acceleration, size and a mass. They know a little about forces and can collide nicely with other objects.

Pendulums are objects that oscillate. They have an internal oscillation or resonance of some sort.

Particles are objects that age with time. They keep track of how long they have existed.

Boids are slightly more advanced particles. They have a desire and they can wander around independently seeking it.

Agents are boids that can sense and act. They also carry a state 'dictionary' where basically anything can be stored (sensory data, urges, genome, phenome, likes, dislikes, etc). Both the sense and act functions, as well as the state dictionary, can be manipulated on the fly, either by the system itself or by the user at runtime.

Worlds provide an environment for the objects. They have properties like size, dimensions, gravity etc and they also keep a list of all objects currently in that world.
For now there are three world classes:
RedWorld - endless in the sense that objects wrap around its borders.
RedWorld2 - a world with soft walls. Objects can go through but at a cost. How soft these walls are and how great the cost is depends on gravity and world damping.
RedWorld3 - a world with hard walls. Objects bounce off the borders - how hard depends on gravity and world damping.
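The three border behaviours are easy to picture in code. A toy one-dimensional sketch in Python (an illustration of the idea only, not the SuperCollider classes; the function names and the damping parameter are mine):

```python
def wrap(pos, size):
    # RedWorld-style: an endless world, positions wrap around the borders
    return pos % size

def bounce(pos, vel, size, damping=1.0):
    # RedWorld3-style: hard walls, objects reflect off the borders and
    # lose some energy depending on damping
    if pos < 0:
        return -pos, -vel * damping
    if pos > size:
        return 2 * size - pos, -vel * damping
    return pos, vel
```

A soft-walled world (RedWorld2) would sit between the two: let the position pass the border but apply a restoring force scaled by gravity and damping.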

The Universe is there to keep track of worlds. It can interpolate between different worlds. It can sequence worlds, swap and replace, and also migrate objects between worlds. All this while the system is running.
The RedUniverse class also does complete system store/recall to disk of all objects and worlds.

So the above are the basic tools. They should be flexible enough to work with: e.g. objects can live in worlds of any number of dimensions. But as noted, one can easily extend the functionality of these classes by subclassing.


How the objects and worlds behave, sound and look is open for experimentation. That is, this is left for the user to code. So while there is great potential for customisation, it also requires more work from its users.
The RedUniverse as a whole tries not to enforce a particular type of system. E.g. one can use it purely without any visual output, or vice versa.
We see it both as a playground for agent experiments and as a serious tool for music composition and performance. We hope it is simple and straightforward, and while there is nothing particularly novel about it, we have certainly had fun with it so far. Foremost it makes it easy to come up with interesting mappings between sound and graphics. In a way we just joyride these simple dynamic systems to create interesting sounds.

The software and examples will be available online on the LAM site. Of course as open source.

work with mark: genetics

I also spent time at UoW learning about genetic algorithms and genetic programming. Mainly from John H Holland's books and Karl Sims' papers. I found it all very interesting and inspiring and again I got great help and input from Rob Saunders.

One of our ideas was to construct synthesis networks from parts of our agents' genomes, i.e. to have the phenomes be actual synths that would synthesise sound in realtime. The first problem to tackle was a really hard one: how to translate the genome - in the form of an array of floats - into a valid SuperCollider synth definition?
Of course there are millions of ways to do this translation. I came up with the RedGAPhenome class, which works with only binary operators and control and audio unit generators. Unfortunately there can be no effects or modifier units. On the other hand the class is fairly flexible and it can deal with genomes of any length (>=4). One can customise which operators and generators to use and specify ranges for their arguments. One can also opt for the topology of the synthesis network (more nested or more flat).
There is no randomness involved in the translation, so each genome should produce the exact same SynthDef. Of course generators involving noise, chaos and such might make the output sound slightly different each time, but the synthesis network should be the same.
This class produces a fantastic range of weird synths with odd synthesis techniques, and it is useful just as a synth creation machine on its own. Here are some generated synths... n_noises, n_fmsynths, and corresponding 5sec audio excerpts are attached below.
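The core principle - a deterministic mapping from an array of floats to a nested expression of binary operators and generators - can be sketched in Python. This is emphatically not RedGAPhenome; the operator and generator tables, the gene-to-frequency mapping and all names are invented for illustration:

```python
# hypothetical operator and generator tables (not RedGAPhenome's)
OPS = ['+', '*']
GENS = ['sin', 'saw', 'pulse']

def translate(genome):
    # deterministically map a list of floats (>= 4 genes) to a nested
    # synthesis expression: the same genome always yields the same result
    assert len(genome) >= 4
    def gen(g):
        # pick a generator and a frequency from a single gene
        name = GENS[int(g * 10) % len(GENS)]
        freq = 20 + (g % 1.0) * 1980  # map the gene into 20..2000 Hz
        return '%s(%.1f)' % (name, freq)
    expr = gen(genome[0])
    for g in genome[1:]:
        op = OPS[int(g * 100) % len(OPS)]
        expr = '(%s %s %s)' % (expr, op, gen(g))
    return expr

expr = translate([0.5, 0.25, 0.75, 0.1])
```

The nesting here is always left-leaning; choosing between nested and flat topologies, as RedGAPhenome allows, would mean deriving the tree shape from the genes as well.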

Then, after the struggle with the phenome translation, the code for the actual genetic algorithms was easy to write. The genome and its fitness are kept in instances of a class called RedGAGenome, and the cross breeding and mutation are performed by the class RedGA. There are a couple of different breeding methods but I found the multi-point crossover one to give the generally best results. All the above classes and their respective helpfiles and examples are available here. And there are many more automatically generated synths in the attached krazysynths+gui.scd example below.
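Multi-point crossover itself is simple to state: cut both parent genomes at the same few random loci and alternate which parent contributes each segment. A hedged Python sketch (function names and parameters are mine, not the RedGA API):

```python
import random

def multipoint_crossover(a, b, points=3, rng=None):
    # breed two equal-length genomes: cut at `points` random loci and
    # alternate which parent contributes each segment
    rng = rng or random.Random()
    cuts = sorted(rng.sample(range(1, len(a)), points))
    child, parents, which, prev = [], (a, b), 0, 0
    for cut in cuts + [len(a)]:
        child.extend(parents[which][prev:cut])
        which, prev = 1 - which, cut
    return child

def mutate(genome, rate, rng=None):
    # replace each gene with a fresh random value with probability `rate`
    rng = rng or random.Random()
    return [rng.random() if rng.random() < rate else g for g in genome]

child = multipoint_crossover([0.0] * 8, [1.0] * 8, rng=random.Random(1))
```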

I also made a couple of fun example applications stemming from this. One is a six voice sequencer where you can breed synths, patterns and envelopes. It is attached as 'growing soundsBreedPatternEnv.scd' below. (Note that the timing is a bit shaky. I really should rewrite it to run on the TempoClock instead of the AppClock.)

Ref articles:

Frankensteinean Methods for Evolutionary Music Composition, Todd and Werner
Sounds Unheard of – Evolutionary algorithms as creative tools for the contemporary composer, Palle Dahlstedt
Evolutionary Design by Computers, Peter J. Bentley
Artificial Evolution for Computer Graphics, Karl Sims
Evolving Sonic Ecosystems, Jon McCormack

Ref books:

John H. Holland - Hidden Order: How Adaptation Builds Complexity
Melanie Mitchell - An introduction to Genetic Algorithms
Richard Dawkins - The Blind Watchmaker





update 101128: growing_soundsBreedPatternEnv.scd file updated, also see this post.
update 171229: converted some rtf files to scd and made the GUI run on latest SuperCollider (Qt)

work with mark: istreet - online

Taking the Intelligent Street project further, Mark d'Inverno wanted me to try to get it online. The idea was to let people surf to a webpage, send commands to a running iStreet system and hopefully collaborate with other online users to compose music. Just like in the original SMS-version of the piece, everybody could make changes to the same music and the result would be streamed back to all the online users.

The first prototype was easy to get up and running. We decided to use Processing and write a Java applet for the web user interface, and then stream the audio back with Shoutcast. The users would listen to the music through iTunes, Winamp or some similar program. This of course introduced quite a delay before they could actually hear their changes to the music. But that was not all too bad, as we had designed the original iStreet in a way that latency was part of the user experience :-)
(The commands sent there via SMS took approx 30 seconds to reach us from the Vodafone server and that could not be sped up. The users were given no instant control over the music - rather they could nudge it in some direction with their commands.)
So our internet radio/Shoutcast solution worked just fine and we had it up and running for a short while from Mark's house in London.

That was a totally homebrewed solution of course. We wanted it to handle a lot of visitors, deal with hopefully high traffic, be more stable and permanent, and not run on an ADSL connection.
So at UoW we got access to an OS X server cluster and I started to plan how to install SuperCollider, a webserver and iStreet on it. Little did I know about servers, networks and security, and I had to learn SSH and Emacs to get somewhere. Rob Saunders helped me a lot here.

Then there were some major obstacles. First of all, the cluster didn't have any window manager installed - not even X11. I spent many days getting SuperCollider and scel, Stefan Kersten's Emacs interface for sclang, to compile.
We also had some minor issues with starting the webserver, punching a hole in the university firewall etc., but the major problem turned out to be getting the audio streaming going. I didn't have root access and wasn't allowed to install Jack on the cluster. To stream I needed a Shoutcast client and some way to get audio to it from SuperCollider. I did find OS X programs that could have worked, but none would run windowless on the console. So I was stuck.

The only solution was to write my own streaming mechanism. The resulting SuperCollider class for segmenting audio into MP3s is here. A Java gateway handled the communication between SuperCollider and the Java applet that would stitch these files back together. (The Java gateway program also distributed all the other network data like the chat, checking online users, pending/playing commands etc. It used NetUtil by sciss).

Unfortunately I never got the streaming thing to run smoothly. Nasty hickups in the sound made it impossible to listen to. The hickups were probably partly due to my crappy coding but I think the main error was in the ESS library for Processing. Either ESS (releases 1 and 2) can't do asynchronous loading or Java is just too slow to load MP3s without dropping audio playback. Very annoying.
After that defeat I also spent time with flash and did a little player there that could load and play back MP3s smoothly. With the help from my flash expert friend Abe we also could talk to the flash thing from my Java applet via JavaScript. But time ran out and this would have been a too complicated system anyway.

So the iStreet never made it online. But again I learned a lot about networks, Unix, Java and some tools got developed in the process. RedGUI - a set of user-interface classes for processing, ISRecord and ISGateway for SuperCollider, and the

Screenshot of the Java applet running iStreet online...

work with mark: istreet - recording mp3s for streaming

Mark d'Inverno wanted to see the Intelligent Street installation gain new life online. So for streaming the sound from iStreet over the internet, I wrote a class for SuperCollider called ISRecord. It basically records sound to disk in small MP3 segments, so any sound SuperCollider produces will be spliced into many short MP3 files that can later be sent as small packages over the internet.
The technique is to continuously save the sound into one of two buffers. When one buffer is filled, the recording swaps and continues in the other. The buffer that just got filled is saved to disk and conversion to MP3 is started. This swap-and-write-to-disk cycle should have no problems keeping up with realtime recording. But as the MP3 conversion takes a little bit of extra time - depending on quality, segmentation size etc. - there is a callback message from the MP3 converter that evaluates a user-defined segAction function when the conversion is finished. Thereby one can notify other programs when the MP3 file is ready to be used.
There is also a cycle parameter that controls how many MP3 segments to save before starting to overwrite earlier ones. This is needed to not totally flood the hard drive with MP3s.
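The swap cycle above can be modelled in a few lines of Python. This is a toy model, not ISRecord: the dictionary of segments stands in for disk files plus MP3 conversion, and the seg_action callback mimics the converter's "segment ready" notification:

```python
class PingPongRecorder:
    # toy model of a double-buffered segment recorder: samples go into
    # one of two buffers; when a buffer fills it is handed off and
    # recording continues seamlessly in the other buffer
    def __init__(self, seg_size, cycle=4, seg_action=None):
        self.buffers = [[], []]
        self.active = 0
        self.seg_size = seg_size
        self.cycle = cycle        # overwrite segments after this many
        self.segments = {}        # segment slot -> recorded samples
        self.count = 0
        self.seg_action = seg_action

    def write(self, samples):
        for s in samples:
            buf = self.buffers[self.active]
            buf.append(s)
            if len(buf) == self.seg_size:
                self.flush()

    def flush(self):
        # swap buffers, then hand the full one off; the slot index wraps
        # around after `cycle` segments so old segments get overwritten
        full, self.active = self.active, 1 - self.active
        self.segments[self.count % self.cycle] = self.buffers[full]
        self.buffers[full] = []
        if self.seg_action:
            self.seg_action(self.count)  # "segment ready" callback
        self.count += 1

flushed = []
rec = PingPongRecorder(4, cycle=2, seg_action=flushed.append)
rec.write(list(range(10)))
```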

The actual MP3 conversion is done using LAME, and CNMAT's sendOSC is also needed for the lame-to-sc communication.
Attached is the recorder class plus a helpfile.

Attachment: ISrecord060920.zip (5.81 KB)

work with mark: istreet - osx

At UoW I also rewrote an old installation called the Intelligent Street. Mark d'Inverno and I had worked together on this one earlier and we now wanted it to run on a modern computer (osx). We also wanted to redesign it a bit and make it into a standalone application.

The original Intelligent Street was a transnational sound installation where users could compose music using mobile phones and SMS. It was in turn an extended version of a yet older installation called 'The Street' by John Eacott. This totally reworked 'intelligent' version premiered in November 2003 and was realised as a joint effort between the Ambigence group (j.eacott, m.d'inverno, h.lörstad, f.rougier, f.olofsson) and the Sonic studio at the Interactive Institute in Piteå (way up in northern Sweden).

For the new OS X version we dropped SMS as the only user interface and also removed the direct video+sound links between UK and SE that were part of the old setup. Apart from that, the plan was to move it straight over to SuperCollider server (SC3) and rather spend time on polishing the overall sound and music.

I roughly estimated it would take just a few days to do the actual port. Most of the old code was written in SuperCollider version 2 (Mac OS 9), and the generative music parts were done using SC2's patterns and Crucial libraries. So that code, I thought, would be pretty much forward compatible with our targeted SuperCollider 3. But sigh - it turned out that I had to rewrite it completely from scratch. The 'smart' tweaks and optimisations I had done in the old SC2 version, in combination with the complexity of the engine, made it necessary to redesign the thing from the bottom up - even the generative patterns parts. Lastly, I also dropped the Crucial library for the synthesised instruments and did it all in bare-bones SC3.

But I guess it was worth the extra weeks of work. In the end the system as a whole became more robust and better sounding - and standalone, not to mention - so hopefully it will survive a few years longer.
But I can also think of more creative work than rewriting old code. I've been doing that quite a lot recently. It feels like the old installations I've worked on come back to haunt me at regular intervals. And there are more and more of them every year :-)

Proof: Intelligent Street running happily under osx...

work with mark: shadowplay

Yet another system Mark d'Inverno and I worked on but never finished had the working title 'shadowplay'. We had this idea about an audiovisual installation where people's limbs (or outlines of bodies) would represent grid worlds. Agents would live in these worlds and evolve differently depending on things like limb size, limb movement over time, limb shape and limb position. The agents would make different music/sounds depending on the world they live in. A limb world could be thought of as a musical part in a score. The worlds would sound simultaneously but panned to different speakers to help interaction.
The visitors would see the outline of their bodies projected on a big screen, together with the agents represented visually in this picture as tiny dots. Hopefully people could then hear the agents that got caught or bred inside their own limbs. We hoped to achieve a very direct feeling of caressing and breeding your own sounding agents.
There were plans for multi-user interaction: if different limbs/outlines touched (e.g. users shaking hands), agents could migrate from one world to another. There they would inject new genes into the population, affecting the sound, maybe dying or taking over totally. To keep agents within the worlds, they were made to bounce off the outlines. But one could shake off agents by moving quickly or just leaving the area. These 'lost' agents would then starve to death if not adopted by other users.

The whole thing was written in Processing and SuperCollider. Processing did the video and graphics: getting the DV input stream, doing blob tracking (using the 3rd-party library blobdetection) and drawing the agents and the lines for the limbs. SuperCollider handled the rest: the sound synthesis, the genetics, agent states and behaviours, keeping track of the worlds etc. We used a slightly modified version of our A4 agent framework, which I wrote about here.
The two programs communicated via network (OSC) and would ideally run on different machines.
I had major problems with the programming. The math was hairy and all the features were very taxing on the CPU. We never got further than a rough implementation.

