VERC: Half-Life 2 and Source Mod FAQ
Last edited: 2019-04-19 09:58:33 UTC

General

Can I have the HL2 SDK now?

Not yet. We are working on it now and will release it as soon as we think it is ready for public use. We understand that there is a great deal of anticipation from the community, and we want to put the necessary time and resources into the SDK so it is done right.

We are not distributing the SDK prior to its general public release.

When will the SDK be available?

We have no firm date on this yet, but will provide more information as soon as it is available.

Will there be a modified German version of Half-Life 2, like there was for Half-Life 1?

We'll be working closely with the German censor board to make sure Half-Life 2 can be distributed throughout Germany.

A system of object maintenance, called the "Warehouse", has been designed. Will Valve offer a free service for uploading and downloading objects to maintain "Warehouses"?

A service like this may be offered as part of the Valve Editing Resource Collective. Theoretically, we would allow people from the community to upload their prop models, which would then be reviewed by a number of editors before being accepted to the site (to maintain a certain level of quality).

The prop models used in Half-Life 2 will also be available for use by the mod community.

My question wasn't answered here, what should I do?

If you have a question related to Half-Life 2 modding or the Source SDK, you can send email to sdk@valvesoftware.com.

Mapping

Is mapping for Half-Life 2 similar to mapping for Half-Life 1?

Map editing for the Source engine will be very familiar to those who did mapping for Half-Life. We've always tried to extend rather than reinvent whenever possible. The main challenge will be digesting all the new tools and capabilities to get the most out of them.

The rough structure of the map is still BSP-based, built from brushes, but the details are fleshed out with more props (models) that are built in XSI, Maya, or Max.

World bounds are +/- 16384 units in all directions (Half-Life's world bounds were +/- 4096). Mods can choose their own unit scale for the world. For example, 1 unit could be made to be equal to 1 foot, or to 1/10 of an inch. Physics will be adjusted accordingly, but collisions are only guaranteed to be accurate to 1/32nd of a unit.

We have found that a large part of the cost of creating a Half-Life 2 quality level is in the art production process. Building all the props and textures required for the level of visual quality people saw at E3 consumes a lot of man-hours. As for building a Half-Life 1 quality map in the Source engine, the new tools and entity I/O make it go much faster than it did in Half-Life 1.

What is the production polygon count per scene for Half-Life 2 maps?

Rather than a polygon count, we aim at a specific performance level on high-, medium-, and low-end machines. The map designer has tools (similar to Half-Life's r_speeds output, but greatly expanded) that will help pinpoint areas that need to be optimized.

What is the maximum map size for Half-Life 2?

The maximum map size is currently set at +/-16384 units (16x the horizontal area of Half-Life 1, 64x overall volume).

What is the ratio of map units to real-life units?

Unlike in Half-Life 1, things in Half-Life 2 will look proper when created at a scale of 1:1 (1 unit = 1 inch). It will be much easier to reproduce real-world environments in Half-Life 2. Gordon currently still requires 33 units of clearance to pass through a space (though this also may be subject to change).

What are some of the key new features of the Hammer editor?

Some of the new features in Hammer:

entity I/O
displacement surfaces
hierarchical visgroups
a Half-Life 1 -> Half-Life 2 conversion path
per-face luxel density with preview
smoothing groups for brushes
graphical/editing helpers (radius, lightcone, line, text-based)
an extensible VMF file format (no more RMF versioning!)

Can you explain what the new Entity I/O system is?

Entity I/O is the way entities are connected in the Source engine. It's essentially an inter-object messaging system. When certain internal events occur, an entity can fire outputs, which can be connected to inputs of other entities.

The output can optionally pass data to the receiving input. For example, a wheel can pass its Position as a value from 0 to 1 to the Alpha input of a sprite, so as you turn the wheel, the sprite gets brighter.

A given entity can have many different outputs, for example doors can fire an output when they are Opened, Closed, BlockedOpening, BlockedClosing, etc. which can be connected to any other entities in your map. The level designer controls all of this from Hammer, so you have a lot more power for building things.

Can entities be linked to each other? For example, a breakable window on a door?

Yes, we use entity hierarchy for this. Hierarchy is a way of attaching entities to each other so that they move together despite being of different entity types. For example, you can have a func_rotating in hierarchy with a func_tracktrain to give the train a rotating part, or you can parent a breakable windowpane to the train, or you can attach a camera to a vehicle.

Have the compile tools undergone any changes?

We've created distributed visibility (vvis) and radiosity (vrad) tools to greatly cut compile times. Distributed tools can harness the power of multiple computers (on a LAN, for instance) to increase their processing power.

What is displacement editing like?

At a simple level it's painting tools similar to Maya's Artisan. You create a brush, and can tessellate any four-sided face or faces to up to 17x17 verts. Then you use a variety of tools to perturb the verts in three dimensions (it's not just a height field). You can build caves, domes, curves, and Hammer will automatically stitch the edges together.

Can the behavior of triggers be extended?

Yes - we use filters to do this. Filters are a way of extending the rules for things like when triggers fire or what can damage breakable objects. For example, I can add a classname filter to a trigger_once and make it only fire when touched by npc_headcrab. I can also make a breakable object only break when damaged by a grenade.

What are overlays and how do you use them?

Overlays are a natural extension of decals. They aren't projected but live within the surface. You can vertex manipulate the corner verts, so they are totally orientable. Texturing is 1:1 and wrappable as opposed to projected.

Modeling

What is the polygon budget for models?

We target between 3000 polygons (e.g. headcrabs) and 7500 polygons (e.g. Alyx, the G-Man) for characters and monsters, depending on function and how many we hope to have on-screen simultaneously, and we have several stages of LOD (level of detail) models with drastically reduced polygon counts for when things get smaller in screen space (further away).

We're targeting around 2000 polygons for our viewmodels (which of course do not LOD) including hands. Some current examples (final values are likely to be different):

v_crossbow.mdl: 592
v_smg2.mdl: 2854
v_smg2.mdl: 1851
v_bugbait: 1958
v_rocketlauncher: 1791
v_physics: 2298
v_shotgun: 1925

How many vertices can I have per polygon, on a model?

The engine deals with triangles (which means 3 vertices per polygon) but our exporters from XSI divide all polygons with greater than 3 vertices into triangles.

Is XSI mandatory for modeling?

It isn't mandatory, but Valve uses XSI internally for almost all modeling. We currently have exporters for XSI, 3DSMax and Maya. We currently have no plans to ship other exporters, but will provide the necessary resources for people wishing to create their own.

What was the polycount on the driveable buggy?

Currently it's a bit heavy at around 12000 triangles. It will be reduced before the game is done.

We've been told that the Source engine does not support dynamic model mesh LOD. Is a particular number of static meshes with varying poly counts required, or are we able to, say, choose 8 levels of detail, or even 1 if we didn't want any LOD to occur?

LODs are defined in the model's .QC file by referring to a number of static meshes based on distances. So yes, if you desired 8 levels, you could do so, or retain your full poly count by not including any LODs in the .QC. In most cases, we shoot for 3 or 4.
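Under those rules, a model's .QC might declare its LODs something like the fragment below. The switch distances and file names are hypothetical, and the exact keywords may differ from what ships in the final SDK; this is only a sketch of the idea of distance-keyed static mesh replacement.

```
// Full-detail reference mesh used up close
$model "body" "soldier_reference.smd"

// Swap to cheaper static meshes as the camera moves away
$lod 35
{
    replacemodel "soldier_reference.smd" "soldier_lod1.smd"
}
$lod 70
{
    replacemodel "soldier_reference.smd" "soldier_lod2.smd"
}
```

Omitting the $lod blocks entirely would keep the full-poly mesh at all distances, matching the "no LOD" case described above.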

Is there going to be a version of XSI distributed with the Half-Life 2 SDK?

Yes. We are working very closely with SoftImage to make the integration as tight as possible. This includes distribution of a free version of XSI. More details on that will be released over the coming weeks.

Technical

What language will the Source SDK be in?

The Source SDK will be similar to the existing Half-Life SDK in that it will be C++ based, not using a proprietary scripting language. Programmers familiar with the Half-Life SDK source code will be comfortable immediately with the Half-Life 2 SDK source code.

What tool is used for designing Half-Life 2 levels?

A new and improved version of the Valve Hammer Editor is used for Half-Life 2 level design. Designers who have used previous versions of Hammer will find using the new Hammer to be a very similar experience. The interface of the program has stayed relatively the same while several new tools have been added.

Will the map compile tools support any type of distributed compiling, similar to Zoner's netvis? Alternatively, will they use hyperthreading?

Yes. Both the vis and radiosity tools currently support distributed operation. Distributed computing is harder to implement than hyperthreading support, but it has the potential to increase performance by a huge amount (10x on our tools) as opposed to hyperthreading (about 30%). As we move forward, we anticipate that even more of our tools will take a distributed approach.

The distributed tools require LAN speeds and a network that supports multicasting. The distributed tools were fairly fast on our old 10 megabit network, and got a lot faster on our new gigabit network (the peak throughput I've seen is about 360 megabit). Performance of these tools is scalable against the number of distributed machines in use.

It's possible that the tools can be adapted to work over DSL-type speeds, but it would require some coding.

What program is being used for Half-Life 2 modeling?

We use XSI. We're working with Softimage to release a free version of XSI over Steam. We will provide exporters for XSI, 3DS Max, and Maya.

We'll also package up all of the custom Half-Life 2 XSI add-ons and make them freely available to people who are already using XSI.

It will probably be available through some non-Steam delivery method as well.

Can I use XSI to create everything from maps, objects and characters, and then take them into Hammer?

You'll want to create your core map (brush) geometry in Hammer, though you can create significant amounts of geometry in XSI and import them into your maps as static or dynamic props.

We're using XSI primarily for model creation, texturing, animation, and creating normal maps and other textures. Hammer is the tool where the map architecture, props, characters, lighting and gameplay all come together.

How are the proper mouth movements accomplished when a model speaks?

To start with, you'll need to author keyshapes, or morph targets, for each of your characters. Our facial animation system uses 34 keyshapes, 14 of which are required for proper lip-sync animation.

To create the lip-sync animation, you'll use our new character-acting tool, FacePoser. Load your character model into FacePoser, load the audio file into the phoneme editor window, type or paste the dialog from the audio file into the text entry window and the phoneme data will be extracted. The automatic extraction does a good job on clearly enunciated dialog of up to around 5 seconds in duration, though our animators often like to fine-tune the performances by hand afterward. More manual work may be necessary on audio files of longer duration, with poor fidelity, less clear performances or long gaps between dialog.

The lip-sync data is saved into the header of the audio file so that when a character is told to "play" a sound in the engine, they'll move their lips based on the phoneme data encoded in the .WAV file. Additional FacePoser data, such as character acting or direction (look at this character, walk to this target, etc.) is saved in a separate scene file (.vcd).

Does Half-Life 2 use WAD files for textures, like Half-Life 1?

No. You now apply materials to surfaces instead of textures. Materials (VMT files) can refer to a number of textures (bump maps, base textures, environment maps, etc.), which start out as TGA files and are converted to textures (VTF files) via a custom tool before you can use them in the game. Textures and materials are not compiled into WAD files anymore. They are stored as individual files in a materials folder. Optionally, they can be stored in a file system inside a ZIP file. You can embed these ZIP files directly into BSPs if you want.
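A material file in this scheme is a small key-value script. Here is a rough sketch of what one could look like; the texture paths are hypothetical, and the shader and parameter names are assumptions about the format rather than confirmed details from this FAQ.

```
// brickwall003a.vmt - a material referring to several textures
"LightmappedGeneric"
{
    "$basetexture" "brick/brickwall003a"        // base color texture (built from a TGA)
    "$bumpmap"     "brick/brickwall003a_normal" // normal map
    "$envmap"      "env_cubemap"                // environment-map reflections
}
```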

What's the replacement for r_speeds?

There is a new system for budgeting that shows where time is being spent in the code at a high level. We are setting framerate targets for different levels of hardware capability. The display of the budgeting info is in the format of a graphics equalizer-like view of where time is being spent. This gives you a good view of where you are getting spikes, and the average performance.

What format do textures need to be created in for HL2?

Texture source art is now a TGA file (24-bit, or 32-bit with alpha). Textures have a maximum size of 2048 x 2048, and each dimension must be a power of 2.

Low-end DX6 hardware will be limited to 256x256 textures. The tools will automatically create these low-detail textures. Alternatively, you can create them yourself and specify in the material's properties that they be used.

How do physics behaviors work in multiplayer?

The simple answer is that there are client-side and server-side physics behaviors. You use client-side when maintaining cross-client coherence isn't important. This cuts down the network traffic while maintaining the appearance of physical simulation throughout the world.

It's definable per-object, so exactly what is client-side and what is server-side is tunable by the designer.

Will Half-Life 2 support MP3s?

We support MP3 playback, but not user-selectable music in-game (e.g. a UI for setting up playlists). It would be an easy mod.

How does the Valve Anti-Cheat (VAC) technology tie in with Half-Life 2?

It will be part of the multiplayer components.

Does Half-Life 2 support cooperative gameplay?

We aren't doing a cooperative mode, but the mod community has expressed interest in creating this type of gameplay. Sven Coop has expressed great interest in this, and they've already done a great job with Half-Life 1 co-op mode.

We have heard many times that the Source engine makes things easier for mod developers. What things are made easier?

Steam for mod distribution, entity help integrated with Hammer, distributed compile tools, an open map file format for extensibility, a script-driven sound system... lots of things make Source better for mod developers.

How different is the console in Source? Is command aliasing still supported?

The console is now a separate VGUI window. We still support aliasing. One of the cooler features we've added is full command completion. We do this with the map command, for example: as you type, we scan the hard disk for available maps. This saves a fair bit of typing :)

We've also collapsed the notion of cvars and commands into a single root console primitive, the "ConCommand". Cvars and commands are defined in a single line, so there's no longer a separate registration system. The whole console autocomplete logic is mod-extensible.

Does Half-Life 2 support dynamic lighting?

Yes.

Will Enemy vehicles be mod'able and if so what type of AI will be used for the flying ones?

Vehicles are mod'able. You can create wheeled vehicles, or flying & hovering vehicles.

Multiple people can be in vehicles. So you can make a multiplayer mod with groups riding in a single vehicle. Each player can have custom control available for things like vehicle movement and onboard weapons.

All of the vehicle code is in the game and client .DLLs, and there's a separate set of script files which mod makers can create and tweak. We allow for NPCs/AIs to drive the vehicles.

We have two different simulation models for vehicles, so you can tradeoff simulation accuracy for performance if you want large numbers of vehicles. The tradeoffs between the two models are the accuracy of the simulation at the wheels. You can trade less accurate wheel collisions/response for less CPU time. You can do several vehicles with the full simulation on our target machines, but you can always use the CPU for something else if the wheel simulation isn't as important to your mod.

A simple tradeoff would be a raycasted wheeled vehicle versus a "real wheel" physically simulated vehicle. Raycasts are about 1/3 to 1/5 the cost CPU-wise of the simulated wheels. The compromises are that raycast wheels don't apply force to physics objects beneath them. So driving over a teeter-totter wouldn't make it tip in the raycast case.

What things affect AI in the Source Engine?

AIs are aware when squad members die.

The AI has lots of interactions with physics. You can control the AI collisions and collision response, navigation around/with physics objects, and specific behaviors involving physics (exerting forces on objects, etc).

Some of Half-Life 2's NPCs use physics as their basis of movement and interaction with the world.

We'll ship all of the Half-Life 2 AI code to mod authors as a reference for creating their own AIs with physics behavior.

How much control do modders have over rendering?

You have access to the material system from the client DLL. You can render anything that is rendered anywhere else in the game. You also will be able to write your own shaders that are accessed by the material system. Without making new shaders, the existing shaders can be tweaked with scripting in VMT (material) files.

You also have full control over the rendering loop and could even insert your own rendering primitives (as long as you ultimately end up rendering triangles...).

Can I put normal maps on everything?

In DX8, you can put specular normal maps on all models, and you can put diffuse and specular normal maps on world geometry (including displacement maps).

In DX9, you can additionally put diffuse normal maps on models.

Are shaders based on PS1.0 or PS2.0? What about old hardware that doesn't have pixel shaders?

Shaders are ps1.1, ps1.4, or ps2.0 depending on what hardware you are running on. You can also code up fixed-function shaders for older hardware if you wish.

What is the networking system like and how flexible is it?

First, you have total control over what entity data is transmitted for a particular entity. In other words, there's no more fixed "entity_t" structure where you had to wedge fields in and override them.

Second, you can write custom data proxy code to massage data values into more networkable values on the fly.

Third, you have all of the low level prediction code exposed to you in the client .DLL and you have all of the server-side lag compensation code exposed to you in the game .DLL.

Fourth, there are a bunch of useful new diagnostic modes to show you when things mis-predict and help you track down inconsistencies between the client and server versions of things.

One of the cool things we've added to the multiplayer engine is the ability to predict the creation of additional entities, such as projectiles... so you can do a predicted rocket, have it simulate on the client and even do a non-lagged rocket jump in your mod if you wanted to.

Also, entities that go out of the PVS on the client are no longer destroyed and recreated upon re-entry to the PVS. The entities live continuously on the client. In fact, you can create additional, purely client-side entities and have them simulate completely locally, too.

If you're familiar with the Half-Life 1 SDK code, there's now a C_BaseEntity on the client that matches the CBaseEntity on the server in almost all ways.

As for the performance of the networking code, it compresses as well as the Half-Life 1 code and is much more flexible.

How would I do cel shading with Source?

You would write your own vertex/pixel shader combination for a new custom shader and then use that shader in the materials that you want to be cel shaded.

What's the strategy for scalability?

All models are LOD'd discretely. You can make lower-end versions of the models as necessary. Displacement maps are simpler on the low-end. You can also tag entities as only showing up on specific DX-levels in the maps. On the low-end, you get less lightmap resolution. As for shaders, they typically fallback from complex normal-mapped, specular shaders to single pass simple shaders on the low-end.

What's the current state of FSAA?

Anti-aliasing has been fixed on all cards.

Do you use HLSL or Cg for your DX9 shaders?

We use HLSL.

Would it be possible to make a mod similar to Battlefield 1942 with Source?

First, BF1942 has pretty large maps, so you'd probably want to scale your units down to allow for a playing field multiple miles on a side.

Second, Source has pretty flexible vehicle code, which allows players to drive and ride in vehicles and to separately control guns attached to themselves or to the vehicles.

Third, as for infantry combat, we have a really robust system for prediction, collision detection, and weapons logic (hitscan and projectile and physics based -- all predicted).

Fourth, we have a bunch of UI functionality to do zoomable mini-map overlays, etc. all based on VGUI2.

Fifth, we have some additions to our outdoor rendering, including a real-time occlusion system for dealing with issues unique to outdoor environments. We still maintain a BSP-tree based system for dealing with more traditional indoor environments. So, you can now do extremely large outdoor environments that seamlessly transition indoors.

How does Source simulate fluid?

We simulate objects moving through fluids with surface fluid pressure and buoyancy. We don't simulate the fluid itself, but floating works automatically based on the density and volume of the objects and the fluid (all masses and densities of objects and fluids are scriptable).

Does the enemy AI have some intelligence like firing at explosive barrels when you're near them or throwing objects at you?

Yes. For example, the zombie is aware of objects in the environment. His AI realizes when he can throw an object at you and applies the appropriate physics forces to make it happen. The zombies are not scripted to throw specific barrels; it's all tactical.

The blades cutting zombies in half are physics and AI as well. The zombies have been modeled so they can be cut in half, but beyond that they simply walk into the blades, and the physics causes the damage that splits them in half.

Will there be a benchmarking routine in Half-Life 2?

Yes. There will be a detailed benchmark released before the game ships where you can run a particular card through any of the DirectX levels where appropriate.

What is the sound system like?

Sounds are all driven by a script file, and sounds are played by label, not by .WAV filename, so once you add the hooks into the game code to play the sound, exactly which WAV gets played, over what channel, and with what probability is all controlled by the script. This is great because the sound guy can iterate on sounds without the help of a programmer. We also Doppler-shifted bullet sounds, which is just cool!
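A hedged sketch of what one entry in such a script could look like; the label, channel name, and file paths here are hypothetical, and the exact keys are an assumption about the format rather than something this FAQ specifies:

```
// One named sound that game code plays by label, not by filename
"NPC_Soldier.Alert"
{
    "channel" "CHAN_VOICE"
    "volume"  "0.9"
    // Pick one of these WAVs at random each time the label is played
    "rndwave"
    {
        "wave" "npc/soldier/alert01.wav"
        "wave" "npc/soldier/alert02.wav"
    }
}
```

The point of the indirection is exactly what the answer above says: the sound designer can reshuffle WAVs, channels, and probabilities in the script without touching game code.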

Half-Life 2 supports 5.1, 4.1, and 2-speaker systems. Half-Life 2 supports wave files with 8- or 16-bit depth, mono or stereo, optional ADPCM compression, and arbitrary sampling rates up to 44.1 kHz, as well as MP3 files. We support our own DSP in software. We're working with Creative on supporting EAX for that as well.

The sound engine also includes functions for directionality and environmental falloff (atmospheric dB attenuation, etc.). Everything's modeled at real dB levels and all blended together. There's also a whole set of tools for adding dramatic soundscapes and multiple layers of audio (background, action, talking, etc.) that get blended together cinematically.

There's also some very fancy stuff, like bullet sounds that are modeled and spatialized as continuous sub-sonic 120 dB sound sources moving through space.

Do collisions with LOD terrains cause problems in multiplayer?

No, the collision representation is consistent across client and server regardless of client-side LOD.

What are the demo and movie making capabilities of the Source engine?

Right now we support a demo file format that has the flexibility to be post-processed (at least for camera control).

For demos, we have a nice UI for skipping ahead, fast-forwarding, slow motion, pause/resume, etc. during playback. This has been pretty useful for us for cutting the E3 movies, for example.

We also have a separate demo metafile editor built into the engine that works on .VDM files (Valve demo metafiles). The metafiles allow queuing up events to occur at certain frames of the demo during playback: screen fade in / out, titling, sound fading, changing playback speed, issuing arbitrary commands, etc.

Finally, we have another built in tool called the "Demo Smoother". This tool allows for completely re-authoring the camera track of a demo as a post-process. The tool allows for placement of a spline for the camera and then computes the correct camera position and orientation based on the spline positions and quaternions.

Another thing, which I think will allow for believable movie performances by actors, is Face Poser, the tool we've created for authoring facial animations.

How are the facial expressions created?

We have a really neat tool called the "Half-Life Face Poser" which allows you to control all of the facial muscles on our humanoids' faces (which could be faces on alien legs in a mod, though...). In particular, you can store set poses as facial expressions, or you can animate the face muscles with timing curves. These primitives can be placed together on a timeline to have your actors act out a scene.

Face Poser allows for queuing up the sounds the AIs might say and includes ways to extract the phonemes/visemes from the .WAV files and to tweak them until they are just right. In addition, actors in scenes have other controls which can be specified in Face Poser, such as "look at this other actor", "move to this spot", etc.

How do 3D skyboxes work? Can I animate a skybox?

The 3D skybox is a part of the level at a different scale than the rest of the level. You can place entities in the 3D skybox (like flying objects, etc). There is a "normal" skybox on the outside of the 3D skybox as well. We also have animated cloud layers, and I could see a mod going nuts with this. That would be great!

How can I connect game logic to a shader?

There is new functionality in the client DLL called "Material Proxies". What this means is that you can hook entity-specific code to particular materials. For instance, if you want an entity to get more translucent based on its velocity, this is now trivial. You can also pick different materials to put on the entity based on game state.

How are the server requirements, compared to Half-Life 1?

This is a hard question to answer accurately since we aren't done optimizing. I think the Source server is comparable. We still simulate user input from clients in a similar fashion, and we still have to network data to the client as we did before. We've done some spot optimizing to detect changed fields, and so on. I think most of the machines running servers today will be able to continue doing so.

How easy is it to change the visible properties of objects?

You can use Material Proxies to decide how an object is going to be rendered. If you want to make the object visible through walls, Material Proxies are the place to do this. Different proxies can be used for different vision systems.

Will mods be able to set maxplayers > 32?

Our plan is to allow a mod to go above this on a per-mod basis. No promises yet on that :)

How do you author bump maps?

We use normal maps in particular. We make these in three different ways:

Can bump mapping be applied to 1st person weapon models, and what sort of materials are there that can be used other than the ones shown on the E3 demo?

You can use bump maps on weapon models. There are a number of shaders that these materials can use, including custom ones that you make yourself; the number of those is infinite.

What sort of occlusion support do you have?

We have a PVS system. We also make heavy use of a dynamic occlusion system in outdoor maps.

What's the largest possible texture?

2048 x 2048. If there is a need for larger textures, we can up this limit.

Would it be possible to make a GTA3-style mod in Source?

Yes. You would use the vehicle system and AI driver model that we already have. You'd probably downscale the world units to allow for the larger city sizes. The mission structure is very doable including custom UI for setting up the missions.

What's the most complex shader that you've written?

As far as rendering complexity, probably the fully generic diffuse and specular bump mapped shader that we use on models. Some other complex ones are the refraction effects (like the Gordon stained glass in the E3 demo) and a camouflaged model shader.

What are the capabilities of VGUI2?

VGUI2 is much more robust and feature complete than the original VGUI in Half-Life 1. We've added systems to allow script files to dynamically animate the positions and other variables in VGUI2 panels. Mods will also have source code to all of the default controls.

You can also draw 3D things into VGUI panels so you can put a rotating model inside your menu. Also, you can have a VGUI panel in your 3D world that you can manipulate so you can have an object in the world containing your menus, etc.

Were the I-beams and dumpster in the trap town demo scripted or physics + AI?

They were all AI; we just hinted for the soldiers to stand in opportune spots so they could get smashed by the heavy physics objects. :)

How will the weapon system be set up regarding viewable models in 1st and 3rd person view? Will the same mesh be used for both or will it use a system similar to Half-Life 1 with the _w _p system?

Weapons have two models - the in-view model and the world model (seen in 3rd person). You can set a separate FOV to render the in-view model if you'd like. Our artists find it easier to get the weapons to look the way they want with this feature.

Is it possible to have modeled bullets like in Max Payne?

Yes. There are already similar effects being done in Half-Life 2. The limitations and performance issues with this are minor. It would be harder to do this in a networked game, but still possible.

Steam

How can we use Steam to deliver card-specific resources?

We can deliver special versions of content to users with particular hardware based on the hardware that they have in their machines. Using this, you could conceivably make a mod that really pushes the limit of hardware and requires specific hardware to run. With the flexibility that you'll have with rendering, I could see mod makers go nuts with this.

What role will Steam play in the mod community?

First, it's a great development tool for running betas, rolling changes, getting instant feedback, etc. Second, it will improve the distribution problem -- your mod will be played by more people. Finally, it will introduce a more direct path to consumers for those who want to take their mod into the commercial space.

Half-Life 1 Related

Will it be possible to port content from Half-Life 1 to Half-Life 2?

We've made this as painless as we could.

We learned a lot through our experiences with TFC, Counter-Strike, Day of Defeat, and so on. This engine is much more mod'able than Half-Life 1 was, and the tool set has been improved a lot. We'll also be releasing a bunch of material to help mod teams get their existing work up and running on the new engine.

The new Hammer will load Half-Life 1 levels (.RMF or .MAP source files only). You will need to retexture, and you'll probably need to redo most of the entities, but this is a major jumpstart compared to starting from scratch.

Models will need a little touching up in QC files and also need to be recompiled in order to work in the HL2 engine. Again, you must have the original source data in order to do this. We will release all of our in-house tools for 3DSMax, XSI and Maya with the SDK.

Almost all parts of the HL2 SDK will be a little bit familiar to those who have worked with the HL1 SDK. Of course, there are a lot of new features, options, and systemic changes to explore, but overall the modding experience for HL2 is very similar to HL1.

Will the HL2 SDK include tools to help convert HL1 content to HL2?

Yes, the HL2 SDK will include several tools for this; for example, taking a WAD file and converting its contents to the new material format. Details and tips about converting HL1 code and content to HL2 will also be included.
This article was originally published on the Valve Editing Resource Collective (VERC).
TWHL only archives articles from defunct websites. For more information on TWHL's archiving efforts, please visit the TWHL Archiving Project page.
