VERC: Source Machinima

This is a short interview with Yahn Bernier discussing some aspects of the Source engine in relation to machinima.

What machinima creation and editing tools will Valve provide? For example, will there be a cinematics editor or built-in editing capability?

Source includes a variety of tools for doing this kind of work. The first is Faceposer, a tool we use to sequence our actors. It includes functionality for specifying animation, including detailed timing curves for each of the facial muscle controls. It also contains functionality for preprocessing .wav files to extract the relevant phonemes/visemes, which can be hand-tweaked in the tool using a graphical UI. The phonemes again map to control of facial muscles over time. In addition, Faceposer contains tools for a lot of scene-blocking work, such as moving actors to their marks, specifying which objects or actors they are to face or look at, etc.
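For context, in the Source SDK as it eventually shipped, Faceposer saves this sequencing as plain-text choreography (.vcd) files. The fragment below is an illustrative sketch rather than a real dump; the actor, sound entry, and mark names are all made up:

    // Illustrative .vcd fragment (hypothetical names)
    actor "!_alyx"
    {
        channel "audio"
        {
            event speak "intro line"
            {
                time 0.000000 2.500000
                param "npc_alyx.intro_line01"  // .wav whose phoneme data drives the mouth
            }
        }
        channel "blocking"
        {
            event moveto "walk to mark"
            {
                time 0.000000 3.000000
                param "!mark1"                 // the mark the actor walks to
            }
        }
    }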

The second set of tools relates to our demo recording and playback. The engine supports recording a demo and playing it back at a fixed framerate at any supported resolution. The frames are output as .tga or .bmp files, which can then be imported into Premiere or another package to create a movie in whatever format you want. The fixed-framerate playback outputs a new .wav file with exactly matched sound, so you can get a high-quality movie even if recording it to disk doesn't occur at real-time framerates.
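In the shipped engine this workflow maps onto a handful of console commands. The sketch below uses placeholder demo and movie names, and the exact startmovie flags vary between engine builds:

    // Record the performance
    record mydemo                 // start writing mydemo.dem
    stop                          // stop recording

    // Play it back at a fixed framerate, dumping frames and a matched .wav
    host_framerate 30             // lock playback to 30 fps regardless of render speed
    startmovie mymovie tga wav    // write mymovie0000.tga, 0001, ... plus mymovie.wav
    playdemo mydemo
    endmovie                      // stop dumping frames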

Previewing is as simple as loading the engine and playing back your recorded demo of the actors.

We have some support in the AI for scriptable cameras, but we also have a tool that allows post-processing of the camera track in a recorded demo. The edited track is written back to the demo file, and when you're happy with all of that, you just record the frames out to disk as noted above. We built a bunch of rudimentary tools in there for splining the camera and camera-target tracks, changing the FOV/zoom, etc.
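In the release engine this surfaced as the built-in demo editor. Assuming a recorded demo named mydemo, something like the following opens the splining and smoothing tools during playback:

    playdemo mydemo
    demoui         // toggles the demo playback UI, including the camera smoother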

Finally, our demo files support an optional "metafile" which can specify events or actions to play at arbitrary frame numbers during demo playback. We use that for titling, fade-in/fade-out, cueing additional sounds, etc. Obviously you could do a lot of this kind of work in Premiere.
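In the release version this metafile became the .vdm file that sits alongside the demo. The fragment below is a rough sketch of the format from memory; the tick values, names, and sound path are hypothetical:

    // mydemo.vdm: a rough, hypothetical sketch of the metafile format
    demoactions
    {
        "1"
        {
            factory "ScreenFadeStart"   // fade in at the start of playback
            name "fadein"
            starttick "0"
        }
        "2"
        {
            factory "PlayCommands"      // cue an extra sound part-way through
            name "stinger"
            starttick "300"
            commands "play misc/stinger.wav"
        }
    }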

What sort of utility will Valve be providing for the creation and manipulation of scripts? Will there be a user interface or any sort of aid in the process?

Our tools allow commanding the various AIs to move to their marks, so you wouldn't need humans steering your actors. The Faceposer application specifies the timing of when actors move to their marks, etc.

Will there be an interface that allows you to immediately view the results of actors lip-syncing as we record?

We don't have a realtime tool for this (it's something we may get to for doing lip-sync of multiplayer chat, but no promises). We use Faceposer to preprocess our .wav files and embed the timing info for driving the mouth facial expressions. Note that we also encode closed-captioning data with the same tool (we support Unicode/non-Western fonts for captioning).
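In the shipped game the caption text itself lives in Unicode resource files keyed by sound entry name. A minimal sketch, with a hypothetical token and line:

    // resource/closecaption_english.txt (stored as a Unicode text file)
    "lang"
    {
        "Language" "English"
        "Tokens"
        {
            "npc_alyx.intro_line01"  "Over here!"
        }
    }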

Are facial animations separate from animations of the whole body?

Yes, you can animate a single facial muscle or overlay a body gesture onto an actor independent of any other activity the actor is currently undertaking. You get lots of control.

What formats will Source/Hammer support as outputs for videos we create?

The output is just intermediate still frames, so you can spit them out at any resolution/framerate you want (depending on how patient you are, I suppose).

Will the engine support a preview feature, something that allows you to see the general flow of what you're animating before you set it to render at high resolution?

Yes, just playing the demo of the scene in the engine will show you what it will look like when recorded to higher-resolution frames.

Thanks, Yahn!
This article was originally published on the Valve Editing Resource Collective (VERC).
TWHL only archives articles from defunct websites. For more information on TWHL's archiving efforts, please visit the TWHL Archiving Project page.
