Rough Stuff & Research
The first three months of the year threw me for a bit of a loop. I was able to secure a contract gig for the summer (yay) involving some work in Unity (oh.), so I decided to upend the dev priorities to facilitate brushing off my C# / Unity skills.
I made a sample room using assets from the Victorian Interiors and Sally’s Country Home asset packs (just to get my feet under me for working in 3D) and a simple room generator (though without decoration yet). The room generator is embarrassingly buggy so I won’t link it, but if you want to see the first 3D thing I’ve ever made in Unity and have a Mac, knock yourself out.
Most of the rest of the work was getting up to speed on pipelines for photogrammetry, 3D stuff in general, and figuring out the development plan for the following quarter.
I have reinforcements for spring quarter! Three people working with me to make room generators, and two working on photogrammetry. We’ll see what we get done, but the rough plan is to have working room generators with room decoration for two asset packs (Victorian Interiors and Village Interiors), and interstitial connecting rooms.
The hope is by the end of the quarter, I’ll have a (terrible, probably buggy) online memory writing environment that, given a
- JSON of the palace generator tag generation space
- JSON of the ontology graph
- JSON of the current memory library
will be able to say “this memory could currently be a rusty lamp sitting on this particular rug” as you write.
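To make that idea concrete, here’s a rough sketch of what that lookup might look like. Every field name and tag below is an invented placeholder — none of the real schemas exist yet — but the shape of the job is “intersect a memory’s tags with the ontology entries for the palace generator’s objects”:

```python
# Hypothetical stand-ins for the three JSON inputs described above.
palace_tags = {  # the palace generator's tag generation space
    "object": ["rusty lamp", "cracked mirror"],
    "surface": ["rug", "mantel"],
}
ontology = {  # the ontology graph, flattened to object -> concept tags
    "rusty lamp": ["decay", "light"],
    "cracked mirror": ["decay", "reflection"],
}
memory_library = [{"id": "m1", "tags": ["decay"]}]  # the current memory library

def suggest_placements(memory, palace_tags, ontology):
    """Return (object, surface) pairs whose ontology tags overlap the memory's tags."""
    suggestions = []
    for obj in palace_tags["object"]:
        if set(ontology.get(obj, [])) & set(memory["tags"]):
            for surface in palace_tags["surface"]:
                suggestions.append((obj, surface))
    return suggestions

print(suggest_placements(memory_library[0], palace_tags, ontology))
```

Here both objects share the “decay” tag with the memory, so every (object, surface) pairing comes back as a candidate; a real version would rank rather than enumerate.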
It’s really important to me (maybe misguided?) to make it possible for people to add new memories, change the ontology, or even add new 3D models through packages. But I also want people to be able to do that without knowing all the plumbing, so we need tools.
I’ll admit I don’t understand it fully (I’ll see if Joe has a post / paper I can link), but the memory object verification comes from using visibly pushdown automata to help navigate the possibility space at the intersection of the two grammar libraries: memories and room generation.
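I can’t vouch for the actual construction used here, but the defining feature of a visibly pushdown automaton is easy to sketch: the input alphabet is partitioned up front into call symbols (which always push), return symbols (which always pop), and internal symbols (which never touch the stack), so the stack behavior is driven entirely by the input. A toy version checking well-nested room/object tags (the symbol names are invented):

```python
# Fixed alphabet partition: which symbol pushes or pops is decided in
# advance by the alphabet, not by the automaton's current state.
CALLS = {"open_room", "open_object"}      # push on these
RETURNS = {"close_room", "close_object"}  # pop on these
MATCH = {"close_room": "open_room", "close_object": "open_object"}

def vpa_accepts(word):
    """Accept iff calls and returns are properly nested; internals are ignored."""
    stack = []
    for sym in word:
        if sym in CALLS:
            stack.append(sym)
        elif sym in RETURNS:
            if not stack or stack.pop() != MATCH[sym]:
                return False
        # any other symbol is internal: no stack action
    return not stack  # accept only with an empty stack

print(vpa_accepts(["open_room", "open_object", "lamp", "close_object", "close_room"]))  # True
print(vpa_accepts(["open_room", "close_object"]))  # False
```

The payoff of the visibility restriction is that, unlike general pushdown languages, VPA languages are closed under intersection — which is presumably what makes intersecting the two grammar libraries tractable.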
I need to arrive at a spec for the room generators soon, but a piece that’s still up in the air (that I’d like to resolve more this quarter) is the interplay between super-rough simulation and parameterized text / grammars. Simulation to enforce some model of causality, and parameterized text to deliver the surface text of the generated characters’ stories.
Currently the prototype just grabs a bunch of memories (root grammars) and smushes them together for a character. This won’t cut the mustard for causality, especially when those events involve other characters, so I need another layer on top for making the variable outcomes / character qualities consistent.
First, I’ll implement a rough simulation layer, which will work as a generator / selector for events. In turn, events spawn “casting calls” for characters as part of their pre-conditions, and work some kind of effects on said characters. Events will also have state-driven pre-conditions, so if I put in some really basic planning (GOAP?), I’ll hopefully create a system where I can use the simple simulation to generate a skeleton that’s used to pull in grammars, and inform which parameterized text they cash out in.
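As a sketch of that layer — with entirely invented event and character shapes, since the real spec doesn’t exist yet — an event carries a state precondition, a casting call, and effects that get applied to whoever fills the call:

```python
import random

def meets(character, requirements):
    """Does this character satisfy the casting call's requirements?"""
    return all(character.get(k) == v for k, v in requirements.items())

# One invented event: runnable only in a given state window, casting
# for a witness, and leaving an effect on whoever is cast.
EVENTS = [
    {
        "name": "house_fire",
        "state_pre": lambda state: state["year"] >= 1890,
        "casting_call": {"role": "witness", "requires": {"town": "Milford"}},
        "effects": {"traumatized": True},
    },
]

def run_step(state, characters):
    """Pick a runnable event, cast a character, and apply its effects."""
    runnable = [e for e in EVENTS if e["state_pre"](state)]
    if not runnable:
        return None
    event = random.choice(runnable)
    candidates = [c for c in characters if meets(c, event["casting_call"]["requires"])]
    if not candidates:
        return None
    actor = random.choice(candidates)
    actor.update(event["effects"])  # the event leaves its mark on the character
    return (event["name"], actor)

state = {"year": 1895}
cast = [{"name": "Ada", "town": "Milford"}, {"name": "Bea", "town": "Crestline"}]
print(run_step(state, cast))
```

The sequence of (event, cast, effects) tuples this emits is the “skeleton” that would then pull in grammars; a planner like GOAP would replace the `random.choice` with goal-directed selection.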
The filled casting calls and effects for events will then be passed to the memory text system, which will generate the surface text, and likewise the hierarchical metadata necessary to interface with the palace generator. It’s sort of like the casting / quality use in Ice-Bound’s system, except pushing another layer of procedurality on top (the simulation), using a simple planner to direct it all, then piping that output into another generative system (palace generator).
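A toy version of that last hop, where the filled casting call parameterizes a grammar template and the same bindings double as palace-facing metadata (the grammar, symbols, and metadata shape are all invented):

```python
import random

# One root symbol with one template; a real grammar would have many
# alternatives per symbol and recursive expansion.
GRAMMAR = {
    "memory": ["I still remember the {event}; {name} never spoke of it again."],
}

def render(symbol, bindings):
    """Cash a grammar symbol out into surface text plus palace-facing metadata."""
    template = random.choice(GRAMMAR[symbol])
    text = template.format(**bindings)
    metadata = {"tags": sorted(bindings.keys())}  # stand-in for the hierarchical metadata
    return text, metadata

text, meta = render("memory", {"event": "house fire", "name": "Ada"})
print(text)
print(meta)
```

The point of the two return values is the siloing described below: the text side can get arbitrarily fancy without the metadata contract to the palace generator changing.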
I like this approach because it silos the simulation away from the text production, which means I can decide separately where I want to turn the dial up on complexity: I can either have really nice, complex, generative surface text over rough events, or more complex, generative events with rough text. Whichever ends up more feasible / expressive / compelling!
I’m worried that will up the authoring complexity / overhead, but it’s all so up in the air currently that it’s hard to even do rudimentary analysis of it. The authoring tool may shed some light on what the requirements analysis would need to be to trigger “laugh hysterically and run”, but until then we’ll just have to make some prototypes and see what sticks.