August 2024 newsletter

(If you enjoy following our progress and want us to continue, definitely consider sponsoring Folk on GitHub Sponsors.)

Some links, especially if you're new to Folk:

What we've been up to

Applications and demos

  • WIP: Omar has been working on an iPhone→Folk scroll demo, which should be a really nice way to open people's imaginations about how to use the system with traditional computers
    • a random video (not how it'll look when it's done):
    • (It isn't close to finished yet, and may even require the new evaluator: latency and frame drops degrade the experience a lot, since you expect it to keep up with iPhone scrolling. A good push for performance, though.)
  • Daniel Pipkin has been looking at using a depth camera (Kinect + libfreenect2) to implement touch detection

Friends and outreach

  • Our open house on August 18 featured snacks (thanks to Victoria) and new animation programs for 3D calibration; we ran our old colleague Ian Clester's music programs from last year on the downstairs system, and we had good conversations about programming, the future of computing, and architecture + computing.
    • omar_gadget_glow.jpeg 20240903-103554.jpeg
  • Omar visited our friend Ashwin Agarwal at Recurse Center, worked a bit on the Folk system there that Jessie Grosen had set up last month (testing 3D calibration on that setup)
  • Omar: I was in San Francisco at the end of August (talking about a side project at !!con) – I spent a lot of time showing off the handheld Folk gadget while I was there, talking about some of the possibilities, and coming up with a reasonably cool demo set that can be done with a laptop + the current gadget
    • Was nice to be able to work a bit remotely!
    • 20240903-235502.jpeg 20240903-235521.jpeg 20240903-235537.jpeg
  • August 1: Ariana Martinez visited, and our friend Tinnei Pang's friend Ben Zweig visited:
    • ben_visit.jpg 20240902-011446.jpeg 20240902-011545.jpeg 20240903-102149.jpeg
  • August 6: Azlen Elza & his friend Carmen visited – made a precision die, drew stuff
    • 20240902-011916.jpeg 20240902-012046.jpeg
  • August 15: Kariina Altosaar visited, and we talked about how she might help out with the project and get a Folk machine set up (she'll borrow one of the Chromebook systems for now)
    • kariina_doodle_1-medium.jpeg kariina_doodle_2-medium.jpeg 20240902-012517.jpeg
    • Omar had just brought the gadget into Hex House that day, so we did an impromptu demo:
      • kariina_gadget-medium.jpeg
  • August 15: Gwen Brinsmead and Colton Pierson visited – chatted about custom keyboards, agents, tangible computing
    • august_16_gwen_colton_visit-medium.jpeg
  • Andrés figured out how to make Folk stickers (come to an open house if you'd like one)

Fractal

  • August 6: Andrés visited Fractal University to scout out their plans for installing their own Folk system:
    • fractal_plans_2-medium.jpeg fractal_plans_1-medium.jpeg
    • then Patrick and Josh visited Folk for an hour and we chatted about Josh potentially contributing to Folk

Upstate Carolina Linux Users Group

  • The Upstate Carolina Linux Users Group (UCLUG) set up Folk in a makerspace. They streamed their last meeting where one of the members presented about Briar and their Folk computer setup — here's the video at the relevant timecode (the whole video is an hour long)
    • They have a keyboard editor working and gave a nice little demonstration of writing a Folk program (to make a dial that changes the radius of an orange circle) using it. It's exciting to have a group that we've never talked to download Folk and set it up themselves. Looking forward to many more people doing this in the coming months!
    • Some screenshots of their setup (note: their vertical mounting solution is a ladder :) ):
      • (screenshots)

System improvements

  • Merged Andrés's change from ''Commit'' to ''Hold'' in the basic Folk syntax, which feels more appropriate to what it does
    • Shows a deprecation warning on programs that still use Commit
    • Also changed 3D calibration code accordingly
  • Daniel and Andrés made some hotfixes because Ctrl-S and Ctrl-P were acting up on table keyboards
  • Omar: Use 4 instead of 6 vertices in vkCmdDraw, which fixes glitching on the Pi gadget (and probably other random GPUs)
  • Daniel's receipt-printer pull request builds on 3D calibration and provides geometry for the printed receipts and tags (since they're different sizes than our normal half-letter programs), so they should projection-map correctly.
    • (Omar: I've seen other ESC/POS printers around, so this should come in handy in general.)
    • Should be mergeable soon (it was blocked on 3D calibration getting merged); we're just figuring out some edge cases around editing/overwriting existing programs

Documentation

  • Andrés: I worked on Folk's capability to generate its own documentation automatically. This involved a combination of our local server programming (e.g. When /someone/ wishes to serve ...) and using Ruff to generate static documentation from our codebase's comments. Going forward, this should lower the barrier to writing documentation for Folk and demystify the codebase for new contributors.

Desksaver

Other

  • Omar made a 'watchdog' that restarts Folk if free RAM drops below 100MB
    • This workaround has been long overdue (we have a memory leak but haven't had time to fix it properly) and should help keep Pi and Chromebook systems from freezing up completely to the point where you need to reboot them
  • Omar and Daniel spent time trying to debug a Wi-Fi issue (segfault, spaces in the network name, on live USB?) with no luck
    • 20240902-011722.jpeg
  • Fixed Kevin Kwok's e-ink crank device (special thanks to Jesse Li and his friend Alex for helping to pry off the old crank)
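For the curious, the watchdog check above can be sketched roughly as follows. This is a minimal C sketch, assuming the watchdog polls /proc/meminfo for the MemAvailable field; the helper names are hypothetical, not Folk's actual watchdog code:

```c
/* Minimal sketch of a RAM-watchdog check, assuming it polls
 * /proc/meminfo for MemAvailable. These helpers are hypothetical,
 * not Folk's actual watchdog code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns MemAvailable in kB, or -1 if the field isn't found. */
long parse_mem_available_kb(const char *meminfo) {
    const char *p = strstr(meminfo, "MemAvailable:");
    if (!p) return -1;
    return strtol(p + strlen("MemAvailable:"), NULL, 10);
}

/* Restart when available RAM drops below the threshold (~100 MB). */
int should_restart(const char *meminfo, long threshold_kb) {
    long avail_kb = parse_mem_available_kb(meminfo);
    return avail_kb >= 0 && avail_kb < threshold_kb;
}
```

A real watchdog would read /proc/meminfo on a timer and restart the Folk process whenever the check fires.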

Handheld gadget

Omar: An inventory of some of the gadget ideas I've been throwing around for the last few weeks as I've demoed it to people (some are general Folk ideas also, but they feel more vital and mobile with the handheld gadget):

  • Trigger button to 'scan' QR code (or whatever other computer-readable thing) in environment, project content (linked PDF) next to it
    • notice how other people can see it, it's more social, it's not just on your phone
  • How to connect gadget to local Wi-Fi network? You carry a laminated card with fields for SSID and password + a fiducial marker, you fill out the fields with a pen, then scan the card with the Folk gadget to connect
  • Fuller handwriting interaction where you write in a notebook and the gadget projects responses, you get like a 'REPL' or conversational interaction (you can program this way, or ask questions, or run commands)
  • Receipt printer + gadget let you carry a complete system in a lunchbox and pull it out and cover a whole table with computational objects
  • Segment parts of the world and projection-map them with colors or patterns automatically
  • Highlight text in a book or printout that the gadget is pointed to
  • Point gadget at your laptop screen, hold trigger, and drag objects (files, tabs, windows, photos, videos) out of your laptop and onto a Folk table as virtual objects (or into a receipt printer to make them physical objects)
  • Gadget as flashlight that shows a 'spotlight' of information augmenting the world at the world's scale, not at all like a screen (this requires spatial tracking to know how far the gadget is from the surface, so fiducials and/or world tracking, which is exactly what 3D calibration is meant to enable)
    • want more brightness or higher pixel density? just move the gadget closer!
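As a rough sanity check on that last point: with a fixed-throw projector, the projected image width grows linearly with distance, so pixel density (and brightness per unit area) rises as you move the gadget closer. A quick sketch in C; the throw ratio of 1.2 and the 1280px width are made-up illustrative numbers, not the gadget's actual optics:

```c
/* Rough projector geometry: projected width = distance / throw ratio,
 * so linear pixel density rises as you move the gadget closer.
 * The throw ratio and resolution here are illustrative assumptions,
 * not the actual gadget's optics. */

double projected_width_mm(double distance_mm, double throw_ratio) {
    return distance_mm / throw_ratio;
}

double pixels_per_mm(double distance_mm, double throw_ratio,
                     int horizontal_px) {
    return horizontal_px / projected_width_mm(distance_mm, throw_ratio);
}
```

Halving the distance halves the image width, which doubles linear pixel density and roughly quadruples brightness per unit area.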

We need to make a lot of progress on Folk and on the gadget design to be able to build these, but it's very exciting, and it does feel like the ideas are generated by having the gadget in hand. I think it's been well worth the effort of building it.

Playbook

Over the course of my week in San Francisco, I kind of developed a playbook of quick virtual demos that show some programmability and dynamism on the Folk gadget (although not as good as a full tabletop system with physical programs yet, obviously).

It goes something like:

  1. Set up Wi-Fi on the laptop, plug the gadget in with a huge extension cord; a red calibration error appears on the projection, so you know it's on
  2. Go to http://gadget-blue.local:4273/new; program ''Wish $this is outlined green'', and a green rectangle shows on the wall/table, bright enough to be visible
  3. Program ''Wish $this is labelled "Hello!"'' (or the person's name); the text shows in the green rectangle
  4. Program scaled label (we should add sugar so this is less of a jump from previous): shows live programmability
    When $this has region /r/ & the clock time is /t/ {
      lassign [region centroid $r] x y
      Wish to draw text with text "Hello" x $x y $y scale 3
    }
  5. Program animating scaled label: shows movement, not a static image anymore
    When $this has region /r/ & the clock time is /t/ {
      lassign [region centroid $r] x y
      Wish to draw text with text "Hello" x $x y $y scale [expr {abs(sin($t))*5.0}]
    }
  6. Program camera feedback: shows that camera is active, gadget interacts with the outside world and isn't just a screen
    When camera /any/ has frame /f/ at timestamp /ts/ {
      Wish $this displays image $f
    }
  7. Calibrate on http://gadget-blue.local:4273/calibrate (I carried a mini foldable cardboard calibration board with me with 14.5mm tags): shows the role of fiducial markers, shows how the gadget can be more like a 'flashlight' overlaying on the environment where it always projects at the same physical size

Hardware

Omar: I upgraded the gadget to use a Pi 5, which lets us power the projector from the Pi, since the Pi 5 can now provide up to 1.6A at 5V from its USB ports (so there's only one overall power cable needed).

And I got shorter cables with right angles and/or ribbons, which save space and remove strain (from cables being too long) that was causing a lot of random instability/crashes (HDMI and power would jostle out of the projector and/or Pi before, I think).

20240902-011823.jpeg 20240902-011804.jpeg 20240902-012126.jpeg 20240902-012244.jpeg

Along with the current revision of the 3D-printed chassis, the gadget now feels pretty stable hardware-wise: it doesn't randomly crash (I think this is also thanks to the RAM watchdog), the cabling is much simpler from the outside (just one power cable out), and I've been able to show it off many times in New York and SF with few problems. (It also helps that it always boots into the red calibration warning now that 3D calibration is merged, so it's easy to tell if it's up.)

It all packs into a box: Folk gadget, hand grip, power brick, receipt printer. With just this box, you should eventually be able to activate an entire table with printed programs and point your gadget to run them:

I want that box to kind of be the whole 'computer', maybe with a pen and a notebook/some index cards. You put that in your bag, no laptop needed.

I've been taking it around and coworking from places:

20240902-012601.jpeg 20240902-012654.jpeg 20240902-012715.jpeg

You can even mount the Folk gadget on other things besides the handheld grip. Here, it's mounted on top of a desk we had lying around, and pointed down at a book (we didn't have to plan this, we just saw the parts and tried it, and we had the lamp that we've been talking about for ages!):

20240902-012444.jpeg gadget_stand_test-medium-2.jpeg

Software

The big gadget issue right now is that 3D calibration doesn't work on it (it won't accept any pose; the RMSE is too high and/or not enough tags are detected). And I don't have any physical programs for the gadget anyway, so it's all projected virtual programs that I'm casting from my laptop. I need to get the calibration process to complete, and I need to connect the cat printer.

A few different Pi-5-specific (and/or portable-gadget-specific) software issues:

Pass 4 vertices to VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP

This was also mentioned under system improvements above – it was initially flagged by s-ol bekic on Discord a few weeks ago, but it appeared really strongly in checkerboard calibration on the Pi 5 gadget: weird glitching when rendering calibration tags onto the checkerboard.

Realized we need to pass 4 vertices, not 6, to vkCmdDraw to make it work on a bunch of GPUs. (Why were we passing 6 before? Was it for the Chromebook GPUs or something? Need to test.)
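For reference, here's why 4 vertices suffice: with VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP, an n-vertex strip yields n-2 triangles, so a quad takes 4 vertices where a triangle list takes 6. A small sketch of the standard strip expansion (indices only, not Folk's actual rendering code):

```c
/* Standard triangle-strip expansion: an n-vertex strip produces
 * n-2 triangles, so a quad needs only 4 vertices rather than the
 * 6 a triangle list needs. Indices only; not Folk's rendering code. */

int strip_triangle_count(int vertex_count) {
    return vertex_count < 3 ? 0 : vertex_count - 2;
}

/* Triangle i of a strip uses vertices i, i+1, i+2; odd triangles
 * swap the first two so all triangles keep a consistent winding. */
void strip_triangle(int i, int out[3]) {
    if (i % 2 == 0) { out[0] = i;     out[1] = i + 1; out[2] = i + 2; }
    else            { out[0] = i + 1; out[1] = i;     out[2] = i + 2; }
}
```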

Descriptor indexing

On the software side, the main bug I fixed last month was this bug with garbled text rendering. This is supposed to be the red Folk warning that calibration isn't in place, but the glyphs are all wrong:

ffdafdsdafs.jpg

It turned out that images other than image 0 were generally broken on the Pi, and that included the font atlas image for most fonts.

This is because the Vulkan driver for the Pi (v3dv) does not support dynamic descriptor indexing, even if the index is uniform across invocations, which means we can't pass images as integer indices into an array (they all silently resolve to index 0, which is generally the atlas of the first font loaded, hence the garbled text). It's annoying that there's no validation warning for this (and that blog posts about descriptor indexing claimed support is near-universal, which is wrong).

I came up with a terrible hack that works pretty well, inspired by some of the issue discussion: on the Pi, just emit an if-ladder so that the indices end up static. We only have a max of 16 concurrent images by default anyway. Text works on the gadget now!
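The if-ladder idea can be sketched as a tiny code generator: instead of indexing the image array with a dynamic value (which v3dv silently breaks), emit one static branch per possible index at shader-generation time. A hypothetical sketch, not Folk's actual shader codegen:

```c
/* Sketch of the v3dv workaround: emit one static branch per image
 * slot so the shader never indexes the image array dynamically.
 * Hypothetical generator, not Folk's actual shader codegen. */
#include <stdio.h>

void emit_if_ladder(char *buf, size_t bufsize, int nimages) {
    size_t off = 0;
    for (int i = 0; i < nimages && off < bufsize; i++) {
        off += snprintf(buf + off, bufsize - off,
            "%sif (idx == %d) { color = texture(images[%d], uv); }\n",
            i == 0 ? "" : "else ", i, i);
    }
}
```

With the default cap of 16 concurrent images, the generated ladder stays small.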

Wi-Fi

The Wi-Fi situation is still annoying on the gadget, and this is a problem since it's the only way to get any input or programs to the gadget right now (since it doesn't have projection mapping or any printed programs or a keyboard/editor on boot).

I've been carrying the gadget with my laptop, making a network with my home Wi-Fi SSID+password on my laptop, and then turning on the gadget (it connects to that Wi-Fi automatically) and programming it from the Web on my laptop. This works pretty well but is clunky (I need my laptop, I need to remember to pull it out before the gadget, and my laptop doesn't have internet while it's doing this).

I kept trying to reconfigure the gadget to connect to various local Wi-Fi networks (like one at UC Santa Cruz), failing, and losing contact with it, which meant I had no way to fix the Wi-Fi or get Folk to do anything until I borrowed someone's USB keyboard and used nmtui to switch back to my home Wi-Fi. All kinds of weird hand-built keyboards that I quickly had to learn, plus racing Folk to shut it down before it locks the keyboard again.

(Ideally, we'd use Bluetooth Low Energy so you don't need to get the Pi 5 on Wi-Fi at all and can just control it over BLE from any other Folk system or phone or laptop in the same room.)

Pi camera

I've been working on the camera (I think it's a Pi Camera Module 2, but all the Pi camera modules work more or less the same way) to try to get calibration to work.

I want to switch from using libcamerify+v4l2 (which gives me very little control; in particular, I can't adjust exposure, which is often an issue during calibration) to using libcamera directly. So I added C++ support to Folk, which took like an hour, and it's not too bad to use:

(C++ is not great, but it's the only way to talk to a lot of libraries: OpenCV, libcamera, LLVM, and libfreenect2, which Daniel is now trying.)

Now working on calling libcamera as a new Pi camera backend – I have the stub but can't get it to give me any images yet.

Next steps

Upcoming work for the gadget includes:

  • Make 3D calibration work: will let us actually projection-map programs and other surfaces
    • Exposure setting – libcamera transition. (C++)
    • More lenient or settable RMSE threshold for pose?
    • Integrate multiple frames so you have enough tags to record a pose? (problems with not capturing the scanning projector with camera in a given shot)
  • Trigger button: opens up a lot of point and drag interactions (which we couldn't do with a ceiling system)
  • Projector-camera alignment is sketchy: hard to replicate, drifts, and changes based on distance from the plane
    • The current solution is a few spacers behind half the camera to tilt it up toward the projector area, but even this drifts because the projector drifts, and it feels too dependent on how tightly the screws are tightened; it's not replicable.
    • 20240902-012829.jpeg
    • Maybe use a wide-angle camera instead? or shift it upward physically?
    • Maybe lock the projector in place better?
    • Side note: maybe multiple cameras for more stable depth perception? cameras are cheap
  • Figure out the Wi-Fi situation, maybe run a hotspot from the gadget automatically for now instead of connecting to Wi-Fi? we don't really use internet from it anyway right now
  • Maybe try running some AI vision models? To segment environments, identify arbitrary objects by name, etc.; this opens up new interactions. GGML seems to have a Vulkan backend now, and the Pi 5 does have a Vulkan driver, even if it's not great. Need to spend a few days to actually try it

3D calibration

Omar: We merged 3D calibration! This is a huge advance that's been in the works for much of the last year. We're being aggressive by merging – there are still things to fix, and it's only roughly on par with the old 2D calibration (better in some cases, worse in others) – but this is why we're pre-alpha and don't promise anything to users yet :-)

(it also means that regions are now deprecated in favor of quads)

(3D calibration is also quite important for the portable gadget system, since it has no fixed plane in front of it that we could 2D-calibrate.)

Several small changes before merging:

  • Fixed Jacob Haip's mask-tags, which keeps us from projecting on top of tags and potentially breaking their detection; it now works in/based on 3D space
  • RAM watchdog
  • Allow negative distances in quad buffer, implement quad move, fix quad scale
    • together with the camera-slice fixes, this mostly fixes the animation program as well, once it's tweaked to use quads

Other than these and the table refinement below, this month was mostly refinement/recalibration of individual systems. Why aren't they as accurate as my home system? (Often the answer is exposure settings, or too-coplanar poses, or some other bit of tacit knowledge about calibration.) They aren't too bad now, though:

  • On folk-convivial:
    • 20240902-012341.jpeg
  • On folk-recurse:
    • 20240902-012409.jpeg

Table refinement

Been working on a “table refine” step: something you run after the normal calibration that would (maybe) interactively tighten the camera-projector mapping by looking at how far off tags are projected from where they're expected. (It needs a normal calibration to seed it, since otherwise you won't be able to project even close to where you want.)

It's half done – I can use the prior calibration to project tags onto the grid, and you can see it's way more responsive than normal calibration:

You can also see that there's more error down on the table (the tags aren't well-aligned with the grid), which is what we're hoping this refinement step can correct. It also gives the user control to force better alignment, like on the edges of their table where maybe they didn't get good enough data during initial calibration. That feels useful.

The next step is to read the projected tags, compute the error, and feed it into an optimizer to re-tune the calibration, then loop until the error is really small. Maybe something with Levenberg-Marquardt in a C 'coroutine' with a yield in the middle that goes back to Folk and waits until the tags are seen again.
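As a toy illustration of the error-feedback idea (much simpler than the real step, which would re-tune full calibration parameters, e.g. with Levenberg-Marquardt): measure where tags actually landed versus where they were asked to land, and solve for a correction. Here the "model" is just a 2D translation:

```c
/* Toy version of the refinement step: least-squares fit of a 2D
 * translation from (expected, observed) tag positions. The real
 * optimizer would re-tune full calibration parameters instead. */

typedef struct { double x, y; } Pt;

/* The least-squares translation is the mean of (observed - expected). */
Pt solve_offset(const Pt *expected, const Pt *observed, int n) {
    Pt off = {0.0, 0.0};
    for (int i = 0; i < n; i++) {
        off.x += observed[i].x - expected[i].x;
        off.y += observed[i].y - expected[i].y;
    }
    off.x /= n;
    off.y /= n;
    return off;
}
```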

New evaluator

Reactive semantics

Omar: still working out the semantics, both to make everything update fast and to retract old statements on time (if you retract too early, you get flickering/blinking). We may need to do a little more hinting (use Hold in more of the chain?), or do some tracking of what matters at runtime.

Some random notes from a few weeks ago to give you a sense:

img_1192.jpeg 20240903-234527.jpeg 20240903-234617.jpeg img_1195.jpeg img_1196.jpeg

It's tough to get right because so much code in old-school Folk implicitly relies on convergence/fixed-point semantics, and you realize there are a lot of cases where you're typing the same code but want different behavior. How do we distinguish these? Some things are very interruptible (just wait for the next thing that comes in) and others are not. Some things are very slow and others are not. Some things should have 1 lane at a time (or always do in practice) and others should run as parallel as possible (e.g., running N lanes for N pages).

Scheduler

I got sidetracked by the unreliability of the new evaluator's boot process (sometimes it doesn't boot all the way through, sometimes it segfaults; part of this is that the process pool sometimes gets swamped and nothing gets to run after that). So I've been looking into how to do the scheduler. There are some properties we want out of the new Folk scheduler:

  • Can make a Folk program that:
    • does a Tcl infinite loop without freezing the system
    • does a C infinite loop without freezing the system
    • does blocking I/O (block on network socket, block on user keypress, block on camera frame read 30/60fps) without freezing the system
    • (without needing to explicitly fence your code out into a subprocess with Start process/On process – Folk should automatically manage the OS processes)
      • (this means that we should have a pool of OS processes and spin up new ones to handle tasks as we detect that existing pool processes are blocked on either compute or I/O)
  • Avoid explicit hinting as much as possible – you write straight-line imperative code in a Folk program and just block on stuff (so it's hard to crash Folk by naively writing normal programs. this is how a real OS works!)
  • Don't have too many processes awake and looking for new tasks at once (you want roughly NCPUS such processes, since that's the most tasks you can take on at once on real hardware anyway)
  • Don't have too few processes awake and looking for new tasks, which is a 'livelock' situation where Folk freezes (all living processes are in loops reading the camera or running the web server or whatever, so no one is free to actually run new programs)

Note that we're willing to sacrifice ideal throughput (interrupting C/Tcl processes every few milliseconds or even microseconds, etc.) to get these properties. Responsiveness is more important to us than making long-running processes complete as fast as possible.

The hack I came up with to maintain all these properties is to have an outside monitor process that just checks every few milliseconds and rebalances stuff (makes new processes, kills extra processes).
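The monitor's rebalancing decision can be sketched as pure logic: count the workers that are actually awake and task-seeking (treating any worker that hasn't checked in recently as blocked), and spawn or retire workers to hold that count at NCPUS. This is a hypothetical sketch, not the actual Folk scheduler:

```c
/* Sketch of the outside-monitor rebalancing logic. A worker that
 * hasn't checked in recently is presumed blocked (C loop, syscall,
 * camera read, ...) and no longer counts as available.
 * Hypothetical, not the actual Folk scheduler. */

int presumed_blocked(long last_heartbeat_ms, long now_ms,
                     long threshold_ms) {
    return now_ms - last_heartbeat_ms > threshold_ms;
}

/* Positive: spawn that many fresh workers; negative: retire extras.
 * Holding `available` at ncpus avoids both oversubscription (too
 * many task-seekers) and livelock (zero task-seekers). */
int rebalance(int available, int ncpus) {
    return ncpus - available;
}
```

Go's sysmon thread does something similar, which is part of why its scheduler looks like a good model here.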

I looked at Erlang's ("dirty") scheduler and Go's scheduler for inspiration after struggling with this a bit (a lot of my attempts were unpredictable or would get into a terminal too-few-processes state after killing stuff).

Erlang is weird because of all the hinting it requires: Erlang programmers aren't normally writing and calling C functions directly, so blocking is treated as an exception that you mark up if you expect it.

Go actually seems very close to what we want. It's a work-stealing scheduler that can deal with arbitrary CPU blocks and I/O blocks in a goroutine. It has a little bit of hinting to know about long-running work in advance (it can detect syscalls, since they all go through the Go runtime), but it also has something called 'sysmon' which is basically my hacky outside monitor process proposal, so it can adapt to arbitrary blocking like we want.

What we'll be up to in September

  • Our next Folk open house is on the night of Saturday, September 28, 7-9PM, at our studio in East Williamsburg, Brooklyn.
    • Possibly do a workshop to get people familiar with making stuff in the system beforehand
  • 3D calibration follow-up work
    • Make the web editor emit quads so you can do quad-dependent stuff with web-edited programs?
    • Work on table refinement to try to improve accuracy on folk-convivial and folk0
    • Exposure adjustment and pose view UI for /calibrate?
  • Gadget improvement: main thing is ability to actually track and projection-map physical programs
    • Align projector and camera better
    • Improve exposure to be able to 3D-calibrate gadget
      • Need to finish libcamera camera backend to support Pi camera exposure adjustment
    • Use 2 cameras?
    • Trigger button
    • Cat printer support
    • Put design online?
  • Merge Daniel's receipt printer support
  • Daniel hacking on touch detection a bit
  • Omar: I hope we can start working on some of the stuff with precise camera slices from the new calibration: OCRing fields, checkboxes, stuff like that
  • New evaluator: new scheduler, work on evaluation semantics, eventually resume porting the editor

Omar

Andrés

newsletters/2024-08.txt · Last modified: 2024/09/04 02:36 by osnr
