SharpLife - Dot Net Core based modding p

Created 5 years ago2018-05-06 19:15:02 UTC by Solokiller

Posted 5 years ago2018-09-16 16:20:02 UTC Post #340896
Thanks!

I implemented gl_overbright since i was asked about it.

Without overbright:
User posted image
With overbright:
User posted image
Posted 5 years ago2018-09-16 16:31:27 UTC Post #340897
Could you implement HD skyboxes?
Posted 5 years ago2018-09-16 16:47:05 UTC Post #340899
You mean skyboxes that have a higher resolution? That's already available. It doesn't support TGA skyboxes yet, that's going to have to wait until ImageSharp supports it.
Posted 5 years ago2018-09-16 17:47:54 UTC Post #340900
Yes, i mean higher resolutions, especially very high ones like 2048x2048.

And if you want to add bug fixes, it would be great to address the things below. :)

When you shoot the sky, gunshot particles come out of the sky brush. Can you fix that?

A problem i noticed in the HL SDK: it isn't possible to change the colors of blood decals. I guess decal colors are hardcoded.
I can change the color of blood sprites, but the blood decal colors stay the same.
User posted image
User posted image
It would also be great if you managed to separate the blood sprite color from the palette.
I mean i don't want to use #define BLOOD_COLOR_RED (BYTE)247
Is it possible to use something like this instead?
blood_color human;
human.r = 255;
human.g = 0;
human.b = 0;
And this would affect both blood color and the blood decal color at the same time.

It would be great if you added Hammer modding support.
For example, add a weapon_generic and monster_generic whose models and attributes can be changed in Hammer.
It would be way easier for modders since they wouldn't have to code new guns.
They would add a weapon_generic and customize its damage, clip size, etc.

EDIT

And it would be great too if you could implement 3D skybox support like Source does.
That way people like me who can't map organic stuff like cliffs can map a boxy play area with surrounding .mdl cliffs :)
GoldSource is easier to mod, suitable for smaller development teams and more suitable for amateur modders.

And I think this skybox would not interfere with the retro style of vanilla GoldSource, but i don't know if it's possible to implement in game-side programming.
Posted 5 years ago2018-09-16 18:14:31 UTC Post #340901
The first one is in game code; all it needs is a check on the material type. A separate material type "none" would be needed to disable effects, and that's easy.

Blood color works that way because it's legacy Quake code; i plan to make these use RGB color values instead, so that won't be a problem.

Mod support will have to wait until everything else is done, but i'm sure it's possible to add at some point.

I was already going to add 3D skybox support at some point. I don't see why this shouldn't be done since it's fairly simple and can add a lot to a map without costing much.
I've also just implemented fullbright support, since that goes hand in hand with overbright.

I've also added a cvar mat_powerof2textures to disable texture rescaling to power of 2.
Posted 5 years ago2018-09-20 14:52:19 UTC Post #340924
I've refactored the code that handles model loading and rendering; each model type is now completely in its own library.
The file format, the code used to convert it into an IModel instance, and the code used to load graphics resources and render it are all contained there.

This isn't finalized yet, i want to make the graphics part only handle the loading of resources while leaving the rendering up to the user to allow the same code to be used to render models in-game and in separate viewer programs. IModel handling should eventually be moved out to SharpLife.Game.Shared since it's primarily used by the game.

The renderer has been moved into the client, allowing it to control how everything is drawn. This means all graphics related code is now handled by the client game library.

The engine no longer handles specific model types as map loading is fully handled by game code now, so in theory you could completely replace it with new formats without the engine having to know.

These changes make it easier to handle the remainder of the model rendering code since each type should ideally get only the data it needs to render itself. Since the game handles everything it's easier to handle specific types.

I've also refactored the model manager to remove the explicit handling of bsp submodel loading. The bsp loader now does this, though the submodels still need to be explicitly added to the string list later on. I may change this to allow models to specify what their submodels are so it can be handled automatically there as well.

Another important addition is the bridge between the game client and server, which allows them to share data and communicate directly.
Currently it shares a helper object that's used to format bsp filenames, but i have plans to use this to add debugging features. I'll probably be able to show what i have in mind in the next few days. It should be pretty interesting to see, but i'd like to see if it's possible before saying more.

There is more refactoring left to do to move code from the engine to the game. I'd like the map command to be handled as a shared game command, removing the need to reference saved games in the engine as well as removing the need to pass additional state around when changing levels in singleplayer.

The SharpLife.Renderer library contains types that i'd like to move as well, which should turn this library into a utility library containing interfaces and helpers for rendering and managing resources.

I'm also going to rework the object list networking system so that its delta networking system doesn't have any loss in precision, which was an issue when networking the studio model Frame variable.

When all this refactoring is done the model rendering code should be easy to use in a model viewer. A pretty important bug was reported a few days ago about skin families being missing when the model has a separate texture file, so i'd like to get the new model viewer done soon. To that end i've looked into WPF a bit to see if it has what's needed to make a replacement viewer, and it looks like it's good enough, though i did have to get a third party library (https://github.com/xceedsoftware/wpftoolkit) to cover common controls.

Dot Net Core 3.0 will include support for WPF, but that won't be released until 2019, and when it is, the library i used above may not be available right away. i'll try to convert the SharpLife libraries from Net Core to Net Standard so they can work in the Net Framework WPF environment, so until WPF on Core is ready to handle everything it can still work like that. Unfortunately this may result in an excessive number of libraries being loaded, so that's something to keep an eye on.

We've also been talking about adding an I/O system to SharpLife. This feature would be purely in game code and would allow the same kind of functionality that Source's I/O system has. I've looked into how Source's system works and it looks like the same could be done in SharpLife without requiring changes to the .map and bsp format.

Penguinboy and i have talked about it a bit and if editor level support can be added it should all work fine. Since Sledge is projected to support Source as well the same UI can be used, and the only difference would be in how the data is stored on disk in the .map format.

For backwards compatibility purposes the old trigger system will still be supported, but i have been working on a tool to convert maps that could also convert the old triggers to the new system. This tool is meant to convert older, more limited keyvalues (like func_door's index based sound keyvalues) to a new format, and outputs are stored the same way so this should work out.

For now the focus will be on reworking the model rendering code to use separate data types to pass around rendering data, this will allow me to pass studio model specific data around without the other formats also getting it. After that the debugging tool i mentioned above should be implementable.
Posted 5 years ago2018-09-21 00:44:07 UTC Post #340927
WPF will run on .NET Core 3 but it'll still only work on Windows. A model viewer doesn't have an overly complicated UI so maybe one of the cross platform UI frameworks is viable?
Penguinboy
Posted 5 years ago2018-09-21 06:39:07 UTC Post #340928
I tried out Avalonia, it's similar to WPF but it's not quite there yet. The toolbox doesn't show Avalonia controls but other than that it seems to work.
It currently targets NET Standard; the problem is that some of NET Core 2.1's features aren't available in Standard, and i'm already using them. So regardless of whether i'm using WPF or Core, it won't be able to work out of the box unless i either remove the features or use the out-of-band packages that contain the newer features and hope it all works out.

EDIT: i'm not 100% sure if Avalonia can use Core libraries or not, will need to check.

Microsoft's post on Core 3 said they were going to release a preview in 2018, so it should be available soon. I'll put something together once that happens so i can try things out, and then upgrade later on.

There's an article released just today that says NET framework is done: https://medium.com/@andy.watt83/the-net-framework-is-done-8aec3bbae12d
So i would assume that MS will eventually focus on cross platform GUI support. I'm guessing Core 4 will bring support, but there's no telling how long that'll take.

For now i'll stay focused on the engine itself, but i could always use ImGui for a basic GUI if a model viewer is needed sooner.
Posted 5 years ago2018-09-22 16:48:15 UTC Post #340936
So here's that debugging feature i was talking about:
User posted image
This lets you edit entities directly when you're hosting a listen server. It isn't perfect, but it's good enough to let me test different sequences, frame rates, etc without needing to set up the user input system or use console commands.

It does have some limitations, and i discovered a bug in the networking system where objects that haven't updated in a few frames try to send a delta when they shouldn't, but it's going to make implementing studio models a lot easier. It'll also come in handy down the line when tweaking entities.

I've also finished reworking the renderer so now each model type has its own input data type. This will let me separate out the keyvalues to entities that actually need them, and it avoids having to fill in values that aren't used for some types of models.
Posted 5 years ago2018-09-23 11:23:33 UTC Post #340941
User posted image
Skins and bodies work, i've also implemented the game's frame animation logic so it's smoothly animating now (it wasn't before).

Still need to implement lighting and chrome, and modify the bone controller logic a bit to match the original. Render modes and fx are next; that should just about cover studio models.
Posted 5 years ago2018-09-26 01:37:59 UTC Post #340949
Are you doing origin lighting like GoldSrc or going with fancy vertex lighting?

Amazing work btw, every post impresses me.
Tetsu0
Posted 5 years ago2018-09-26 09:05:32 UTC Post #340951
That depends on whether or not i can reproduce the original lighting. I've been looking into it; it seems that lighting uses transformation matrices that are identical to the bone matrices for hardware renderers. Since those are already available to the shader it may be as simple as passing along the list of lights to the shader (3 lights maximum for studio models).

I'll know more once i've finished researching the original lighting system.

In the meantime, here are some of the render modes in action:
User posted image
From left to right: Normal, Color, Texture, Glow, Alpha, Additive

Normal and Additive have their own settings, the rest all use Texture's settings.
User posted image
Here you can see scale -1 and masked textures.

Scale -1 requires the cull face to be set to front instead of back, which requires every render mode pipeline to have 2 copies for both sides. This is also needed for Counter-Strike's cl_righthand cvar.

Masked textures require 2 copies of those as well.
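As a rough illustration of the pipeline duplication (a sketch only, assuming Veldrid's RasterizerStateDescription rather than the actual SharpLife pipeline setup; scale stands in for the model's scale value):
//Same shaders and blend state; only the cull mode differs between the two pipeline variants
var cullBack = new RasterizerStateDescription(FaceCullMode.Back, PolygonFillMode.Solid, FrontFace.Clockwise, true, false);
var cullFront = new RasterizerStateDescription(FaceCullMode.Front, PolygonFillMode.Solid, FrontFace.Clockwise, true, false);

//A negative scale (or cl_righthand) flips the winding order, so the front culling variant is picked instead
var rasterizerState = scale < 0 ? cullFront : cullBack;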
User posted image
Here's the additive render mode for textures. The mask's eyeholes are black but due to the use of additive they become transparent, taking on the green color of the face texture behind it.

Not yet implemented are lighting and chrome, render fx, flat shading and matching the logic from the game's renderer exactly.
Posted 5 years ago2018-09-28 08:58:58 UTC Post #340960
User posted image
User posted image
User posted image
User posted image
User posted image
From top to bottom:
  1. Testing studio lighting. Only includes lightmap light data & world dynamic lights (dlight). Point entity lights (elights) not yet implemented but should be relatively simple. Different color rooms used to test the effect of different colors, showing that sampling is working correctly.
  2. Testing shadows.
  3. Additive render mode for studio textures. The mask goggles are black but are transparent to allow the green color behind it to be visible. The goggles are chrome, but that isn't implemented yet.
  4. Masked render mode for studio textures. Alpha tested transparency.
  5. All 6 render modes, from left to right: Normal, Color, Texture, Glow, Solid, Additive. Color, Texture and Glow all use the same settings and behave as Texture. The upside down grunt is a test to see if negative model scales work. This is important because Counter-Strike's cl_righthand flips models along the Y axis, and requires culling to be changed.
It's not quite finished yet but it's looking good. This runs at ~400 FPS without optimizations (vis leaf culling and bounding box culling).

To make lighting work i've also implemented light_environment. This is needed because it sets the sky color and normal vector.

What's left to implement is the correct lighting values for all render modes (minor adjustments needed), elights, chrome, glow shell and perhaps flat shading as implemented by glShadeModel(GL_FLAT). This last one is very specific since only normal render mode with additive texture render mode sets it, but only after it has drawn the mesh that uses it. I doubt it's actually required since it seems more like incorrect behavior, but i'll look into it all the same.

Models used:
  • Vanilla hgrunt.mdl
  • Sven Co-op 5 hgrunt.mdl (additive render mode, not included in SharpLife repository)
  • Condition Zero props/antenna_urban.mdl (masked render mode, not included in SharpLife repository)
I've also found a presentation detailing the design of a high performance renderer from Valve: http://on-demand.gputechconf.com/gtc/2016/events/vulkanday/High_Performance_Vulkan.pdf

This could be very useful for designing a threaded renderer.
Posted 5 years ago2018-10-04 09:14:54 UTC Post #340982
Good progress... Seems you're 2/3rds done with it.
Posted 5 years ago2018-10-04 13:14:45 UTC Post #340983
User posted image
User posted image
I've been doing some backend work the past few days, mostly figuring out how the physics code works so i can implement it.

To test settings more easily i've added editable fields for vectors.

Above: Origin, Angles and RenderColor are vectors that use different representations. Origin is a plain edit field, angles is 3 angle edit fields in degrees, and RenderColor is a color picker. The antenna model is rotated to show that it does work.

Below: light_environment's settings can be edited, and are then transmitted to the client. The sky color picker is open, set to bright red. The sky normal is set to a direction that causes most studio models in the scene to use the floor color instead. The model standing on a sky brush and the scaled model use the sky color because the normal is pointing from a sky surface for both of them.

You can use this to move objects around to wherever you want; you can rotate them, change render modes, colors, etc. This means that instead of having to adjust the setting in Hammer, recompiling (onlyents or otherwise), restarting the map and moving back to where the entity is, you can adjust the setting in real-time and then apply it back in Hammer.

Two uses that i personally think are really good for this are getting a good rotation angle for func_door_rotating, and movement distance for func_door.

The origin fields should be lined up horizontally, but there's an issue in ImGui.NET that prevents this from being done easily so for now it's vertical.

The sky color and normal don't affect much; as far as i can tell only studio models are affected by controlling the sky color, and only when they're standing in the path of sky lighting. In theory you could disable lightmaps and use this to do real-time lighting, but without light data (light, light_spot, texture lights) anything indoors would be black.

I'm not sure if the compile tools strip unnamed lights; the lights in this map are still in the bsp along with the original color and brightness values, but old maps might not have them. light_environment handles the case where light data only contains a single color value and no brightness, so really old maps (retail maps probably) might not have all the data needed for this.

Even if that's all there, texture lights would require the original lights.rad to even try to handle them, and i don't know how expensive all this would be to calculate in real-time.

I've been asked if there is a way to modify the values in the level editor; this requires the use of Inter-Process Communication (IPC) and unique identifiers for each entity, so that the entity can be properly identified between the game and the editor. Sledge may get this as a plugin someday, but it's not going to happen any time soon.

Now that this is done i'm going to implement physics for normal entities. Once that's all there entities can move, collide, etc, and trace functions can be implemented on top to finish up some renderer visibility testing code. I'm leaving the rest of the studio model features for later because they're not critical to have right now and i'd prefer to be able to fully interact with the entities to do more testing.

After that, if i can get user input to the server working and player movement physics implemented it should be possible to move around, use and shoot stuff, etc.
Posted 5 years ago2018-10-12 16:39:38 UTC Post #341026
I was working on the prototype for SharpLife's audio subsystem and i discovered that the engine handles cue points in wav files in a really strange way.

People always assumed that having 2 cue points would cause the sound to be looped between those two cue points after the first loop (which plays the whole file).

In reality only the first cue point is considered and what everybody thought was the second cue point is actually a bug in the engine dating back to Quake 1 that causes the range of a looped sound to be limited when the sound file has more than 8 bits per sample.

It doesn't seem to be possible to use a second cue point. It is however possible to limit loops if the "purpose ID" of the first cue starts with "mark". I don't know if it's actually possible to set the purpose ID using audio editing tools (Audacity can't edit cue points at all from the looks of it).

It's very limited, the Quake source code mentions a program called "cooledit" so that might let you do it.

I can fix this behavior for SharpLife, it's pretty easy to do. But some sounds in Half-life use cue points, like for example sound/doors/doormove6.wav has a cue point named MARK964. I can see in my sound system that it has a purpose ID "mark", and 964 is the "Cue Point ID" used to map the cue elsewhere.

I'm not sure if the name is actually what's used here or if GoldWave is just combining the two IDs, but when i modify it the cue is saved as a label instead of as label text and a label. The label text contains sample length information that controls how long the loop section should last for, but the file i tested has it set to the length of the file.

The cue point ID is set to 0 by GoldWave, and the cue position becomes 964 which seems to indicate that it's supposed to be the offset in the audio file where the cue point is located. I don't know why the file originally had it different, since the position is 0 there. The engine doesn't read the cue point ID at all from what i can tell, so it basically isn't using the cue point for looping at all.
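For reference, the chunk that carries the purpose ID and sample length is the ltxt (labeled text) chunk inside the adtl LIST chunk; its layout per the RIFF spec is roughly this (a sketch, not SharpLife's actual loader types):
//"ltxt" chunk fields as described by the RIFF spec
struct LabeledTextChunk
{
    public uint CuePointId;    //Matches the dwName field of an entry in the "cue " chunk
    public uint SampleLength;  //Length of the section in samples; this is what would bound a loop
    public uint PurposeId;     //FOURCC; the engine only checks whether this starts with "mark"
    public ushort Country;
    public ushort Language;
    public ushort Dialect;
    public ushort CodePage;
    //Followed by optional null terminated text
}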

I assume Valve made their sounds using this cooledit program, but it doesn't explain what their intentions were with these looping sounds. Since this design dates back to Quake 1 they may not even have known about the different behavior at all.

I checked another file, sound/plats/rackmove1.wav and it also has a cue point in it, named MARK514. It also doesn't store the actual cue position where it should and so doesn't loop either.

Neither sound really needs a cue point anywhere other than the start (cue points are needed to loop the sound), so they probably didn't even notice anything was wrong. For anyone modifying the sounds the cue point positions will matter, since GoldWave and probably other programs will convert the cue points to correct this mistake.

Does anybody know more about this? I've seen the discussions on the VDC ( https://developer.valvesoftware.com/wiki/Talk:Looping_a_Sound) but those are inconclusive and only apply to Source. I'm sure it's very interesting to know how looping is actually done but if no tools exist to set the sample length property for a cue point then it isn't really useful.

I also found that GoldWave writes an additional LIST chunk that follows a really old (less than 2 months younger than me) spec: https://www.aelius.com/njh/wavemetatools/doc/riffmci.pdf

The LIST info chunk can contain information such as copyright; it tripped up my loader and probably breaks NAudio as well.
Posted 5 years ago2018-10-12 18:06:42 UTC Post #341027
I think you've hit the nail on the head. Nobody really knows about looping sounds in GS; this makes sense, as I've tried multiple cue points in those files and it doesn't have the same effect it does in Source.
I've always just used GoldWave with two named cue points for start and end, but now I'm enlightened with the knowledge that this does almost nothing at all.

Embedded cue points are really odd; the Half-Life games are the only thing I'm aware of that uses them, and support for them is so rare that CoolEdit / GoldWave are about the only programs that handle it.
A sound VMT style thing would be perfect as a replacement.
Instant Mix
Posted 5 years ago2018-10-14 01:49:10 UTC Post #341035
Looks like "cooledit" is now Adobe Audition: https://en.wikipedia.org/wiki/Adobe_Audition

The one time I tried playing with looping/non-looping sounds it didn't seem to work so I moved on to playing with something else.
Posted 5 years ago2018-10-22 17:43:35 UTC Post #341061
A sound VMT style thing would be perfect as a replacement
I could do something that's similar to Source's soundscripts: https://developer.valvesoftware.com/wiki/Soundscripts

Given that they implemented stuff like operators it may be easier to let these be defined using scripts instead, so you can express properties and events programmatically.
I've been working to implement physics support, so far i've got the majority of regular entity physics implemented, with func_door mostly implemented for testing.

Since there's no entity triggering or user interaction yet i added a feature to the object editor to invoke a named parameterless method so i can trigger the door movement methods directly. It works, but it's choppy so i'll need to figure out why that is. It could be related to the framerate or some value being used as int instead of float, but it could also be a networking issue.
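That feature is essentially just reflection, something along these lines (a minimal sketch; the actual object editor code may differ):
//Requires using System.Reflection;
//Invoke a public, parameterless instance method on the selected entity by name, e.g. a door movement method
var method = entity.GetType().GetMethod(methodName, BindingFlags.Public | BindingFlags.Instance, null, Type.EmptyTypes, null);

method?.Invoke(entity, null);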
I've also made a small prototype IO system. You can define inputs and outputs, trigger them and pass values just like Source's version. Here's what the code for it looks like:
[Output("OnPass")]
public OutputEvent<float> OnPass { get; } = new OutputEvent<float>();

[Output("OnFail")]
public OutputEvent<float> OnFail { get; } = new OutputEvent<float>();

[Output("OnPrint")]
public OutputEvent OnPrint { get; } = new OutputEvent();

[Input("TestValue")]
public void InputTestValue(in InputData inputData, float value)
{
    Console.WriteLine($"Firing {value}");

    if (value != 0)
    {
        OnPass.Fire(EventQueue, inputData.Activator, this, value);
    }
    else
    {
        OnFail.Fire(EventQueue, inputData.Activator, this, value);
    }
}

[Input("Print")]
public void InputPrint(in InputData inputData)
{
    Console.WriteLine("Print");

    OnPrint.Fire(EventQueue, inputData.Activator, this);
}
Inputs are public methods marked with an Input attribute, which takes the name of the input as used in the map editor or console commands. They can optionally take a single additional parameter that is the value that outputs can pass to it (or the parameter override if specified). Inputs can also access the value as an object instance through the InputData instance passed to the method, which allows for variant type inputs.

If the wrong type is passed to an input and it can't be converted, the input won't be called, just like in Source.

Outputs are public properties or fields marked with an Output attribute, which takes the name of the output as used in the map editor or console commands.

Outputs can optionally return values, in which case you have to specify the type in the member declaration. This also enforces the requirement that you pass the correct type when you fire the output.

Firing outputs is straightforward: you pass the event queue used to track events for entities (there's usually only one), the activator, the caller, and a value to pass to inputs if the output returns a value.

The event queue is similar to Source's version: a linked list sorted by event execution time, designed so that new events are added to the end of the list of events fired at a certain time, so that events with delay 0 that are added while an event is being fired will fire in the same frame.
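A minimal sketch of that insertion rule (hypothetical names, not the actual SharpLife types):
//_events is a LinkedList<QueuedEvent>; insert after all events that share the same fire time
//so that events queued with delay 0 during dispatch still run within the same frame
var node = _events.First;

while (node != null && node.Value.FireTime <= newEvent.FireTime)
{
    node = node.Next;
}

if (node != null)
{
    _events.AddBefore(node, newEvent);
}
else
{
    _events.AddLast(newEvent);
}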

As far as supported return types go, if you register a converter it'll work. Converters are subclasses of the C# class TypeConverter with ConvertFrom and CanConvertFrom overridden. This way, the USE_TYPE enum can be directly used without having to add it to a variant type like Source requires (Source uses a hack to pass the USE_TYPE value along instead).
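A converter for an enum like USE_TYPE might look roughly like this (a sketch; how converters are registered is SharpLife specific):
//Requires using System.ComponentModel; and using System.Globalization;
public class UseTypeConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        if (value is string text)
        {
            //Map the string from the map editor or console onto the enum value
            return Enum.Parse(typeof(UseType), text, true);
        }

        return base.ConvertFrom(context, culture, value);
    }
}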

This also means that you can easily add support to convert between types without having to edit a massive Convert method.

I'll be integrating this IO system once i've got physics completely done so i can implement +use support. +use relies on the physics code to pass along which buttons are pressed, finding entities in a sphere and then triggering the entity's Use method.

In SharpLife this will be replaced with the firing of the entity's Use input. It will be largely the same since input parameters can be part of the method signature, unlike Source. It should look like this:
public void Use(BaseEntity activator, BaseEntity caller, UseType useType)
{
    //Value not passed to InputData to avoid cost of boxing
    Use(new InputData(activator, caller, null), useType);
}

//Usage:
entity.Use(this, this, UseType.Toggle);
GoldSource actually uses USE_SET when +using entities, so i'll need to figure that out.
Posted 5 years ago2018-10-25 12:01:48 UTC Post #341076
Cross posting this bit from the HLE thread:

I estimate it will be at least 6 months before SharpLife can deliver the same functionality as the vanilla SDK, though given that i'm incorporating HLE's features from the start it will provide much of the same at that point in time. This means things like entvars_t members being accessed through properties (SL doesn't have entvars_t), customizable entity relationships, config files, etc will be available as they are for HLE.

How long it takes depends on how difficult it is to implement original functionality.
I'm getting close with physics, but it will eventually require both lag compensation and prediction to be implemented, which rely on accessing old data. Easily accessing this will likely require restoring the old state, so i have to make sure non-networked members can be restored as well, which they currently can't be.

Once physics is implemented i'll have client input working, which means the user can press their use key, have that converted from +use to a usercmd_t, sent to the server and processed there so it can find and trigger entities. I should also be able to rework the camera movement system to instead use player position in the world at that point.
Everything needed to test physics will be available by that point, so i can verify that it all works.
I also need to make sure the networking system can handle infrequently updated objects properly. Currently it will still send updates for objects that haven't changed so the client can track them, this needs to be improved.
I also started working on a prototype for a GUI, starting by getting the drawing code operational. I studied Doom 3's and ImGui's code for this; they both use a similar method where all drawn primitives are stored as a list of vertices and drawn with a texture. Simple colored primitives use a white texture, and text is ultimately also a texture (one character is typically referred to as a "glyph" in this context), so if i can get that working it should be good to go.
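The core of that approach is just accumulating colored, textured quads; conceptually something like this (a rough sketch with hypothetical types, not the actual SharpLife code; requires System.Numerics and System.Collections.Generic):
//One vertex of a 2D GUI primitive; solid colors and text glyphs are all just textured quads
struct GuiVertex
{
    public Vector2 Position;
    public Vector2 TexCoord;
    public Vector4 Color;
}

//A filled rect is a quad sampling the 1x1 white texture, tinted by the vertex color
static void AddFilledRect(List<GuiVertex> vertices, Vector2 min, Vector2 max, Vector4 color)
{
    //Two triangles covering the rect
    vertices.Add(new GuiVertex { Position = min, TexCoord = Vector2.Zero, Color = color });
    vertices.Add(new GuiVertex { Position = new Vector2(max.X, min.Y), TexCoord = Vector2.UnitX, Color = color });
    vertices.Add(new GuiVertex { Position = max, TexCoord = Vector2.One, Color = color });

    vertices.Add(new GuiVertex { Position = min, TexCoord = Vector2.Zero, Color = color });
    vertices.Add(new GuiVertex { Position = max, TexCoord = Vector2.One, Color = color });
    vertices.Add(new GuiVertex { Position = new Vector2(min.X, max.Y), TexCoord = Vector2.UnitY, Color = color });
}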

VGUI1 can only draw text, filled and outlined rects and textured rects, which my current implementation should be able to handle. Things like rounded corners can be done by using partially transparent textures (alpha channels masking the rounded part), so it shouldn't be that difficult.

The actual widgets like buttons require a good widget management design and class hierarchy, but that's not hard to figure out.
Once this is all working then the engine should be mostly there. Some features are unfinished (brush models don't render water properly, don't do random or animated textures yet, studio models are missing chrome and glow shell, sprites could use some optimizing), but the essentials needed to play a map should be working at that point.
As far as audio goes, my prototype should cover the basics so i can integrate that at some point to do menu music playback and basic per-entity sounds, after that individual features can be implemented.

The stuff that's needed should be there relatively soon if i keep at it, once the engine level stuff is there i can shift my focus to game features. Most of that involves re-implementing SDK code (mostly entities), but i can keep feature requests in mind while i'm doing this.
I do have a request for people wanting to use this for their own mod: if possible could you give me some ideas/feedback on a good way to configure GUI settings? VGUI1 uses resolution specific scheme files, VGUI2 uses resolution independent resource files, so i'd like to make sure i get this part right from the start since the widget design involves getting these values during creation.

Features like interactive building at runtime like VGUI2 does would be nice, so a way to store/update these settings is nice to have. Knowing how the configuration needs to be done plays a big part in designing such an editor.

I know that some people would like to have 3D rendering support in the GUI, i have never done this before but i figure it should be relatively simple by using render-to-texture, which would let it function with the current design for GUI rendering. This is probably a well-known approach but i haven't done any research on it yet.

One good use of this would be in the multiplayer model selection window, which would make the older image based approach obsolete and allow you to view the actual model in real-time.

If anybody has ideas or suggestions i'd love to hear them.
Posted 5 years ago2018-10-25 12:26:12 UTC Post #341077
I don't know if this is the best way to do this, but how about a "per aspect ratio" config with scaling?
Posted 5 years ago2018-10-25 12:38:23 UTC Post #341078
That's possible; i could make config values take effect for specific ratios. Kind of like this:
<Property name="Position.X" value="10" condition="AspectRatio == 16/9"/>
The condition could be evaluated using CSharpScript, though in this case a constant value for 16/9 may be preferable. This would make it more flexible, for instance allowing you to support multiple ratios or different conditions (e.g. Game == "Half-Life", but that's pretty hacky).
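Evaluating such a condition with CSharpScript could look roughly like this (a sketch; the globals type and property names are hypothetical):
//Requires the Microsoft.CodeAnalysis.CSharp.Scripting package
public class ConfigConditionGlobals
{
    public double AspectRatio { get; set; }
    public string Game { get; set; }
}

//The condition string comes straight from the config file
var applies = await CSharpScript.EvaluateAsync<bool>(
    "AspectRatio == 16.0 / 9.0",
    globals: new ConfigConditionGlobals { AspectRatio = 16.0 / 9.0, Game = "Half-Life" });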

EDIT: configs could maybe even be whole scripts. Much more efficient, and you can express things more easily by using the actual code. It would unify the GUI code in the game codebase and config files, meaning you can use the same API.
Posted 5 years ago2018-10-25 20:25:07 UTC Post #341079
A lot of apps use a web engine for their GUI (I do NOT suggest this for your project) and I think a big part of the attraction is the access to all the CSS units and properties which make it easy to produce UIs that scale for any resolution and aspect ratio. Units like vw, vh, (r)em, and percentages. Maybe they can serve as inspiration somehow?
Posted 5 years ago2018-10-25 20:37:28 UTC Post #341080
Yeah we discussed this, i can add support for different units. It may be best to implement things like CSS styling, margin and padding for flexibility.
Posted 5 years ago2018-11-01 12:25:40 UTC Post #341113
Avalonia announced that they can recommend its use in production: http://avaloniaui.net/blog/2018-10-30-avalonia-0.7-release

I'll be taking a look at it sometime soon to see if i can make a model viewer replacement with it. According to some of the comments they don't yet have OpenGL support, but DirectX is supported. Figuring out if and how they support integration with Veldrid will be an important part of determining whether it's suitable for use at this time.
I've made a basic prototype for plugin management. It covers the essentials needed for assembly based plugin loading to work.

The manager loads plugin configuration from a file, much like Sven Co-op's plugin system does and similar to HLE's proposed configuration syntax: https://github.com/SamVanheer/HLEnhanced/wiki/Plugin-manifest-(temp-page)
<?xml version="1.0"?>
<PluginListFile xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
	<Plugins>
		<PluginConfiguration>
			<AssemblyName>TestPlugin.dll</AssemblyName>
			<PluginClassName>TestPlugin.TestPlugin</PluginClassName>
			<Arguments>
				<TestPluginArguments>
					<Name>Test Plugin</Name>
				</TestPluginArguments>

				<cvar name="developer" value="2"/>
				<cvar name="sv_cheats" value="1"/>
			</Arguments>
		</PluginConfiguration>
	</Plugins>
</PluginListFile>
A plugin is loaded based on its assembly name, and the plugin class is instantiated based on the given class name.

If the class implements IPlugin, the manager will create an instance, using dependency injection to provide any objects needed by the plugin.
This could be used to get interfaces like the entity manager, networking, etc. Each plugin can also request the arguments given to it in the config file by specifying a constructor argument of type PluginConfigurationArguments.
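Stripped of the dependency injection, the loading step boils down to something like this (a simplified sketch using the fields from the XML above; constructorArguments stands in for whatever the injection step resolves):
//Load the assembly named in the config and instantiate the configured plugin class
var assembly = Assembly.LoadFrom(Path.Combine(pluginDirectory, config.AssemblyName));

var pluginType = assembly.GetType(config.PluginClassName, throwOnError: true);

if (!typeof(IPlugin).IsAssignableFrom(pluginType))
{
    throw new InvalidOperationException($"{config.PluginClassName} does not implement IPlugin");
}

//The real manager resolves constructor parameters (arguments, engine interfaces) through dependency injection
var plugin = (IPlugin)Activator.CreateInstance(pluginType, constructorArguments);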

Here's a basic example of a plugin:
using PluginHost;
using System.Xml;
using System.Xml.Serialization;

namespace TestPlugin
{
    public sealed class TestPlugin : IPlugin
    {
        public string Name => "TestPlugin";

        public TestPlugin(PluginConfigurationArguments arguments)
        {
            var serializer = new XmlSerializer(typeof(TestPluginArguments));

            var myArguments = serializer.Deserialize(new XmlNodeReader(arguments.Elements[0]));
        }

        public void Initialize()
        {
        }

        public void Shutdown()
        {
        }
    }
}
This plugin takes only the config arguments, and then converts them into an object of type TestPluginArguments, which looks like this:
namespace TestPlugin
{
    public class TestPluginArguments
    {
        public string Name { get; set; }
    }
}
This allows you to specify complex arguments and read them with ease. Note that the example does not do error handling, but that's not hard to add.

This can easily be used to pass configuration around, like requiring cvars to be set or even created dynamically.

Interface versioning is an important part of supporting plugins properly: breaking changes would require the assembly that provides the interface to change its assembly version, which will cause assembly load to fail with an error noting the version conflict. That makes it relatively easy to avoid cases where plugins rely on interfaces that have changed or have been removed.

For changes that don't break the API, like new methods, it won't be necessary to change the version. I'm not 100% sure how C#'s interfaces work in this type of situation, but following the general rule of adding to the end of interfaces might be good enough, or alternatively adding new interfaces that extend from them.

Accessing plugins in game code should only rarely be needed, and can be done in a couple different ways.
Most of the time you won't need to access a single interface, just all of them, so iteration is the common case:
pluginManager.ForEach(plugin => plugin.DoSomething());

foreach (var plugin in pluginManager.GetPlugins())
{
    plugin.DoSomething();
}
The first option is the simplest and won't break easily. It does however create delegates which can add overhead in critical per-frame game logic like RunFrame. The latter option avoids that cost.

Most plugin access in SC revolved around hooks. The way it accesses hooks accounts for the fact that plugins can be reloaded. In C#, assemblies can't be unloaded individually; only the app domain in which they've been loaded can be unloaded. Using app domains means direct access to plugins isn't possible; instead it would use proxies, which adds too much overhead here since plugins will define and access entities, among many other things.

This means that once loaded, a plugin will stay loaded.

As such, the hook system is best implemented by extending the event system currently used in the engine, making each event always pass around an Event object that has properties to support preempting further processing of the event. This is similar to features that exist in most GUI frameworks, such as JavaScript's preventDefault.

This would make preemptable hooks such as SayText pretty easy to implement and use, and avoids a direct reference between the code that emits the event and the plugin system.

Registering such an event handler would be done by the plugin:
public sealed class TestPlugin : IPlugin
{
    private readonly IEventSystem _eventSystem;

    public string Name => "TestPlugin";

    public TestPlugin(IEventSystem eventSystem)
    {
        _eventSystem = eventSystem ?? throw new ArgumentNullException(nameof(eventSystem));
    }

    public void Initialize()
    {
        //The event system can deduce the event name from the event object type
        _eventSystem.AddListener(OnSayText);
    }

    public void Shutdown()
    {
        //Remove all listeners registered by this plugin object (but not other objects created by the plugin!)
        _eventSystem.RemoveListener(this);
    }

    private void OnSayText(SayTextEvent @event)
    {
        //Chat text starting with this command is hidden and suppressed from further event handling
        if (@event.CommandLine.FirstOrDefault() == "/hide")
        {
            @event.Visible = false;
            @event.Cancel = true;
        }
    }
}
This way plugins can plug into the engine and game without ever needing to expose the concept of plugins anywhere.

By registering event handlers in Initialize and removing them in Shutdown, plugins and the game itself can be rebooted, which can flush all past state.

In similar fashion, plugins can also create and acquire console commands and variables by taking an ICommandContext. The same goes for registering entities: a plugin takes the entity factory and registers itself as providing a set of entities, so it can override built-in entities when needed (that isn't implemented yet).

The only downside compared to Sven Co-op and HLE's versions is that you can't reload plugins. To solve that problem CSharpScript based plugins may be used. There is ultimately very little difference between the two once the IPlugin instance has been created, and scripts can be loaded again, although if i understand it correctly the script will stay in-memory even if it isn't used anymore.

CSharpScript does have some downsides but i haven't used it enough to know how much difficulty those add. Handling the difference in the plugin manager is pretty straightforward, and supporting both here means map-specific support can be added by using a map-specific plugin manager. The same config file structure can be reused for everything, the only issue would be that assemblies would stay loaded in-memory.

I don't know how much overhead this adds in memory usage and other costs, but given that the SharpLife assemblies are between 25-80 KB in size when built as Debug, i don't think it's that much. Worst case scenario, servers would need to reboot every so often to flush their memory, which they often need to do due to crashes anyway.

Since most maps that use scripts also tend to become popular the memory cost would only have to be paid once, so that does make things better.

This is only a basic prototype of a plugin system, i'll probably rework it to avoid directly depending on XML serialization so it can be used independently.

I would like to use this for HLMV to let people add functionality to it themselves more easily. Since people request all kinds of features, making it possible to easily add it yourself would really open up model customization if you could modify the model data from a plugin.

If anybody has any opinions about this i'd love to hear it.
Posted 5 years ago2018-11-26 16:18:43 UTC Post #341330
I've made a standalone scripting system that makes it easy to load scripts from various sources: https://github.com/SamVanheer/SharpLife.Scripting

Comes with support for assemblies and CSharpScript. The design makes it easy to add support for other scripting languages as long as you can do interop with it.

It's a separate repository so it doesn't fall under the HL SDK license.

The sample program i've included with it is a basic WPF app that lets you load scripts and display the Description property that all sample scripts must provide. Unfortunately since this is a NET Framework program it pulls in a lot of dependencies, and indirect dependencies don't seem to work so i had to explicitly reference some NET Standard libraries.

Hopefully when NET Core 3 is released i can move the sample to Core to avoid those issues.

I plan to integrate this system into SharpLife and the new model viewer. Model viewer will use this to add support for extension plugins and scripts (e.g. add a new tab with options) which should let you add whatever you want to modify models. Since CSharpScript is plain text it should be trivial to implement small add-ons this way.
Posted 5 years ago2018-12-04 17:12:11 UTC Post #341386
NET Core 3's preview has been released, so i can start doing some experiments with model viewer's design in Core: https://blogs.msdn.microsoft.com/dotnet/2018/12/04/announcing-net-core-3-preview-1-and-open-sourcing-windows-desktop-frameworks/

I'm also working to get the Tokenizer class ready for use with .map file reading; it's now flexible enough to do all that. I've also eliminated the memory allocation overhead involved with creating lists to configure it; now you can just cache the tokenizer configuration instance and reuse it.
Posted 5 years ago2018-12-12 23:18:11 UTC Post #341443
Have you used Unity? Since this is C#, I imagine a lot of Unity developers would be able to jump onboard for small projects if you could find a way to let them attach .cs scripts to entities via Hammer/Jackhammer, just like the workflow that exists with GameObjects in Unity's editor.

Mockup of an entities tab in JACK (point/brush based):
editor with entities list
right click on entity to create script for this one gameobject that inherits the entity class
Unfortunately J.A.C.K is closed source as far as I know, otherwise the UI part of this idea would have been really easy to implement.
(edit: just found out about Sledge, cool! So that's an option)

Bad example with the Bool MyTouch function, since everything after calling base() would not execute because it returns a value, but you get the point.
bonbon
Posted 5 years ago2018-12-14 08:42:21 UTC Post #341450
I've never used Unity before, but i already planned to do something similar. If you look at Sven Co-op's scripting API you'll see that the design is fairly similar; my design for SharpLife is a refined version of that. I'll need to look into making it easier to make single class scripts that are automatically associated with an entity, since it's currently designed to use single scripts for entire maps.
Posted 5 years ago2018-12-26 16:47:36 UTC Post #341490
I'm currently working to make SharpLife.Scripting more flexible when using CSharpScript to make it usable for client side scripting. Currently scripts can access whatever they want so they can use things like reflection, filesystem APIs, etc, which can cause security issues. To prevent this from happening, i've been working on a way to explicitly register an API that scripts can access.

It currently looks like this:
private static void ConfigureAPI(APIConfigurationBuilder builder)
{
    builder.AddAssembly(typeof(App).Assembly);
    builder.AddNamespace(typeof(App).Namespace);

    {
        var type = builder.AddType<App>();
        type.AddMethod(a => a.Test(default, default));
    }

    {
        var type = builder.AddType<string>();
        type.AddMember(() => string.Empty);
        type.AddMethod(s => s.Split(default));
    }
}
First you add assemblies that you want scripts to be able to access. Then you can add namespaces and types to grant or deny access to.
Types can have properties, fields and methods added where you use lambdas to specify the member being added.

It's possible to specify whether you're adding something to allow or deny access, or to inherit the default access settings for that particular type of API functionality (namespace, type or member). These settings can be configured; whichever access rule is set when the configuration is built will be used for all types that don't specify a rule to use.

As you can see in the example you can select method overloads. Since you only need type information and not actual values use of default and default(type) is pretty common.

Once a configuration has been finished it becomes immutable, preventing changes from being made to the API.

By default, an empty APIConfiguration instance denies access to everything except built-in types (int, float, bool, string, etc).
If a script accesses an API that it isn't permitted to access the script is never compiled, so no code is generated and no static initializers will run, preventing any potentially malicious code from executing.

Malicious code includes but is not limited to using filesystem APIs to create files that interfere with client side game functionality, using reflection to access engine functionality to manipulate game behavior (e.g. slowhacking), and using web APIs to download other files that could be used to execute more malicious code.

A properly configured API configuration can prevent abuse while still allowing access to any APIs that are needed. It is possible to block access to parts of types that can be abused, for instance you could allow querying of types and their members, but not allow any invocation of members through reflection.

API configurations are separate from script providers, so one configuration can be used by multiple providers at the same time (e.g. CSharpScript and Python).

I'm still working on it, but most of the functionality is already working. I'm using the Roslyn API for code analysis to check parsed scripts; this API allows for much more than that, so it's pretty interesting.
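The check itself is conceptually simple with Roslyn's semantic model; something along these lines (a heavily simplified sketch, with IsAllowed standing in for the API configuration lookup and references being the allowed MetadataReference list):
//Requires using Microsoft.CodeAnalysis.CSharp; and using Microsoft.CodeAnalysis.CSharp.Syntax;
//Parse the script and build a compilation so symbols can be resolved
var tree = CSharpSyntaxTree.ParseText(scriptSource);
var compilation = CSharpCompilation.Create("ScriptValidation", new[] { tree }, references);
var model = compilation.GetSemanticModel(tree);

//Resolve the symbol behind every member access, invocation and object creation and check it against the configuration
foreach (var node in tree.GetRoot().DescendantNodes())
{
    if (node is MemberAccessExpressionSyntax || node is InvocationExpressionSyntax || node is ObjectCreationExpressionSyntax)
    {
        var symbol = model.GetSymbolInfo(node).Symbol;

        if (symbol != null && !apiConfiguration.IsAllowed(symbol))
        {
            throw new InvalidOperationException($"Script accesses forbidden API {symbol.ToDisplayString()}");
        }
    }
}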

Does this look like it makes sense? Is it easy to understand the purpose and use of this functionality, the reason why it's necessary? Does anyone have any suggestions to improve it if possible?
Posted 5 years ago2019-01-05 18:13:33 UTC Post #341555
I've been working on the scripting system some more. I was going to rework CSharpScript's script class loading behavior to make it work more like the assembly based provider does things, but i discovered that there is no way to directly access the generated assembly's Reflection instance. I've requested that this be added: https://github.com/dotnet/roslyn/issues/32200

Once this is possible the assembly and CSharpScript providers should behave largely the same way. There won't be any need to return the type of the script class, and a callback in the script info instance will allow the user to resolve the correct class. This will make it much easier to implement per-entity custom classes by simply finding the class defined in a script that inherits from the right base class as shown in bonbon's image.

That callback should just do this:
private static Type FindEntityClass<TScript>(CSharpScriptInfo info, Assembly assembly)
{
    var scriptClasses = assembly.GetTypes().Where(t => t.IsClass && typeof(TScript).IsAssignableFrom(t)).ToList();

    if (scriptClasses.Count == 0)
    {
        //No classes found that extend from the required type
        throw new ArgumentException($"Script does not define a class deriving from {typeof(TScript).FullName}");
    }

    if (scriptClasses.Count > 1)
    {
        //More than one candidate class found
        throw new ArgumentException($"Script defines more than one class deriving from {typeof(TScript).FullName}");
    }

    return scriptClasses[0];
}
Where TScript is the type of the entity that it needs to inherit from.

The same thing will be possible using the assembly provider, which makes it easier to use both.

With this in place the game should have a simple scripting system that lets you load scripts once and then create instances of types when you need them like this:
//The script system should cache the loaded script so repeated calls return the same script instance
var script = Context.ScriptSystem.Load("my_script_file.csx");

//The script itself doesn't contain any object instances, these are created on demand only
var entity = script?.CreateEntity(Context.EntityList, typeof(Item));
It may be easier to rework the scripting system to only provide an abstraction for the provider and let the user get the type information using the resulting Assembly, but this would restrict the supported languages to whatever can produce an Assembly instance.

In that case i can remove the type parameter from the scripting system itself, since the only purpose is to make it easier to access individual script object instances and it's better to let users keep track of them. Perhaps splitting the system up into providers and type instance factories would allow for both, i'll need to look into it further.

I'm also going to make sure that scripts can load other scripts using #load, which CSharpScript already supports but which requires me to provide a SourceReferenceResolver. This should let you split scripts up just like you can with Angelscript. That in turn means that this resolver will need to understand how the SteamPipe filesystem works. That shouldn't be a problem since the resolver class API is pretty easy to use, and should just pass through to the filesystem class i've already created for SharpLife.

Edit: i just discovered that there is an existing framework that does pretty much the same thing called Managed Extensibility Framework: https://docs.microsoft.com/en-us/dotnet/framework/mef/

It looks like it can do pretty much everything that this system can currently do, albeit without CSharpScript support. I'm going to look into it some more to see if it can do what i'm doing with my own system.

Edit: MEF can do everything that my system can do, and for the use case i have in mind for it each script assembly would need its own CompositionContainer which eliminates the problem with adding assemblies later on.

That leaves only one problem which is memory usage. I'll have to look into how much it uses for small assemblies like scripts. I read some reports about high memory usage due to dependent assemblies being loaded, but CSharpScript requires explicit references to all dependent assemblies so they have to be loaded already. Indirect dependencies might be a bigger issue but that remains to be seen. The biggest issue i need to look into is how much memory a CompositionContainer needs on its own.

Edit: Looks like an empty container is just 320 bytes in total, though that may not be 100% accurate. Looks like it's worth using like this then:

Assembly and CSharpScript based plugins are all part of one container, exporting their main script implementation using MEF. Required interfaces can be exported from the game and imported by plugins using MEF, allowing for dependency injection on both sides.
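With MEF that mostly reduces to attributes plus a container; roughly like this (a sketch reusing the IPlugin/IEventSystem shapes from the earlier example):
//Requires the System.ComponentModel.Composition assembly
[Export(typeof(IPlugin))]
public sealed class TestPlugin : IPlugin
{
    private readonly IEventSystem _eventSystem;

    public string Name => "TestPlugin";

    //MEF fills in constructor arguments from whatever has been exported into the container
    [ImportingConstructor]
    public TestPlugin(IEventSystem eventSystem)
    {
        _eventSystem = eventSystem;
    }

    public void Initialize()
    {
    }

    public void Shutdown()
    {
    }
}

//Game side: one container over all plugin assemblies, with engine interfaces exported into it
var container = new CompositionContainer(new AggregateCatalog(pluginAssemblies.Select(a => new AssemblyCatalog(a))));
container.ComposeExportedValue<IEventSystem>(eventSystem);

var plugins = container.GetExportedValues<IPlugin>().ToList();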

CSharpScript based map-specific scripts get their own container, with scripts being cached along with their container based on file modification time to allow easy reloading. This reduces memory usage by loading scripts only once per server lifetime. Since a compiled script is just an assembly it can't be unloaded, just like a regular assembly. Otherwise it'd leak tons of memory by way of redundant compilation and assembly generation.

Scripts that are reloaded, either due to a newer version existing on disk or by forcing a reload (cheat command) would throw out the cached version and reload it entirely. The old version would continue to be in memory and objects created from it will continue to exist and be referenced as before, so a complete refresh is needed to oust old versions (reloading the map should do it since it's a map specific script).
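The modification-time based cache described above can be as simple as this (a sketch with hypothetical CompiledScript/Compile names):
//Cache compiled scripts by path; recompile only when the file on disk is newer than the cached copy
private readonly Dictionary<string, (DateTime LastWriteTime, CompiledScript Script)> _cache = new Dictionary<string, (DateTime, CompiledScript)>();

public CompiledScript GetOrCompile(string path)
{
    var lastWrite = File.GetLastWriteTimeUtc(path);

    if (_cache.TryGetValue(path, out var entry) && entry.LastWriteTime >= lastWrite)
    {
        return entry.Script;
    }

    //The old assembly stays loaded; only the cached reference is replaced
    var script = Compile(path);

    _cache[path] = (lastWrite, script);

    return script;
}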

I'm going to look into reworking the scripting system to support this, so most of my current provider code will probably end up being thrown out. The script validation code and API configuration types are still needed so most of it will still see use. This will make it impossible to use non-CLR based languages for plugins or scripts, but i doubt that's a big deal since C# and VB are more than enough to support the required use cases.
Posted 5 years ago2019-01-09 18:00:31 UTC Post #341604
I've committed the latest assemblies for the master branch. The previous ones were a couple months out of date, and i made some changes to debug logging to make it easier to figure out why startup could fail.

The native wrapper now logs to <gamedir>/logs/SharpLifeWrapper-Native.log.

If you enable debug logging (DebugLoggingEnabled=true in cfg/SharpLife-Wrapper-Native.ini), you'll now get this in the log:
[09/01/2019 18:56:14 +0100]: Managed host initialized with game directory sharplife_full (client)
[09/01/2019 18:56:14 +0100]: Configuration loaded
[09/01/2019 18:56:14 +0100]: CoreCLR loaded from C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App\2.1.2
[09/01/2019 18:56:14 +0100]: Runtime started
[09/01/2019 18:56:14 +0100]: AppDomain 1 created
[09/01/2019 18:56:14 +0100]: Managed host started
[09/01/2019 18:56:14 +0100]: Attempting to load assembly and acquire entry point
[09/01/2019 18:56:14 +0100]: Created delegate to entry point SharpLife.Engine.Host.NativeLauncher.Start
[09/01/2019 18:56:14 +0100]: Attempting to execute entry point
[09/01/2019 18:56:16 +0100]: Entry point executed with exit code 0
[09/01/2019 18:56:16 +0100]: Shutting down managed host
[09/01/2019 18:56:16 +0100]: Exiting with code Success (0)
The engine now returns an exit code indicating if anything went wrong. Since the engine can't log anything until the game directory is known, a missing game directory is logged as UnhandledException since it throws an exception. If there are no command line arguments at all the error is NoCommandLineArguments.

This should cover the different problems during startup.
Posted 5 years ago2019-01-11 19:41:12 UTC Post #341619
Could you please stop making off topic posts in this thread? I'm also not interested in you trying to get my attention for whatever you're working on.
Posted 5 years ago2019-01-13 17:13:24 UTC Post #341633
I've been reading the whole thread, and I must admit that 99% of it escapes my poor knowledge of programming languages. Anyway, I have seen the FPS readings for very well known Half-Life maps and I'm really impressed. How did you achieve those FPS? Wasn't the GS engine fully dependent on the CPU's rendering capabilities? Unless you are working on a GPU rendering add-on for GoldSource (that would be cool), or perhaps you are making things better managed by the engine. Sorry if it's written and explained in the thread and I didn't see it, but that could be the solution for many of us trying to make a mod.
I remember Half-Life Enhanced and PowerSource, and they were so promising.
Posted 5 years ago2019-01-13 17:44:14 UTC Post #341635
SharpLife basically ignores the original engine and does everything itself. The original engine is only involved so SharpLife can be called a mod and used legally by modders.

As such I can implement the renderer using modern techniques, which dramatically increases performance.
Posted 5 years ago2019-01-13 18:00:20 UTC Post #341636
As always, I am impressed and willing to use this with my mod if possible. Is there an approximate launch date?
Posted 5 years ago2019-01-13 18:09:39 UTC Post #341637
No, it'll take a while before everything gets to a functional level. I can't say how long it'll take.
Posted 5 years ago2019-01-13 19:50:27 UTC Post #341638
OK, I'll wait. Only one last thing: what features will be found in SharpLife? I don't mean the complex things related to the very innards of the coding, of course. I know this is wishful thinking, but a huge number of entities (yes, I'm mulish about that) and a new renderer able to take advantage of new GPUs (call me a moron, but I recently discovered that almost all rendering in HL is done by the CPU and that the task is not shared with the GPU) would be more than enough to keep very graphically complex mods from crashing or playing at 12 FPS. :crowbar:
Posted 5 years ago2019-01-13 20:04:20 UTC Post #341639
Both of those things should be possible.

I'm currently looking into how to make a material system that lets you choose which shader is used for textures, but something like this will require new file formats and breaking changes in some places (e.g. no more render modes), so I'll have to choose between greater flexibility and supporting the original engine's way of doing things.

I'm thinking it may be better to go for a first version that supports the original formats, then make breaking changes in a V2 branch that focuses on maximizing performance at the cost of dropping features like render modes. There would probably be other ways to accomplish it though; for example, render color could be handled by a proxy that fetches the color from an entity while rendering. I'll probably leave further research for later since it's quite complicated.
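As a purely hypothetical sketch of the proxy idea (none of these types exist in SharpLife), a material could pair a shader name with a parameter proxy that's evaluated per entity at render time:

using System.Numerics;

// Hypothetical material system types, only to illustrate the proxy idea.
public interface IRenderableEntity
{
    Vector4 RenderColor { get; }
}

// A proxy resolves a material parameter at render time instead of baking it into the material.
public delegate Vector4 ColorProxy(IRenderableEntity entity);

public sealed class Material
{
    public string ShaderName { get; set; } = "LightMappedGeneric";

    // Falls back to a constant when no proxy is assigned.
    public Vector4 ConstantColor { get; set; } = Vector4.One;

    public ColorProxy ColorProxy { get; set; }

    public Vector4 GetColor(IRenderableEntity entity)
        => ColorProxy?.Invoke(entity) ?? ConstantColor;
}

// Usage: emulate the old render colour behaviour by pulling the colour from the entity.
// var material = new Material { ColorProxy = entity => entity.RenderColor };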
Posted 5 years ago2019-01-13 21:46:27 UTC Post #341640
If you want a mod made in the most irresponsible way in the world, where good coding practice is a utopia and where the impossible is demanded of the graphics engine (like hordes of NPCs and lots of models with more than 12,000 polygons), to try SharpLife on, :crowbar: you just have to say it!
Posted 5 years ago2019-01-14 15:20:23 UTC Post #341650
Thank you Solokiller. We don't have experience with other engines, and we can't practice more because our profession isn't related to computer science at all.

So there is a demand for your work, and finally we can use the GoldSource engine with more features.

I can't map in detailed engines like UE4, and I can't make good maps in GoldSource either.

But I like to mod because I want a game that fits my taste.

So bigger maps and bigger textures will compensate for my skills.

You told me a 3D skybox and an HD sky are possible.
Are bigger maps and bigger textures possible too?

And WAD files are a pain in the ass too xD. Could you make it so we don't need WAD textures?
An image file in the mod folder works very well, at least for developing a mod.
Posted 5 years ago2019-01-14 16:06:45 UTC Post #341652
I think Solokiller talked about adding the possibility of using materials (correct me if I am wrong!), which can be bigger images than GS's maximum of 512x512, maybe 4096x4096 and its variations (more than enough for detailing!).

The question about the renderer is: what would SharpLife use, and with what scalability? I am probably getting something wrong, but when I hear about renderers I think about OpenGL and that kind of thing; I don't know if Vulkan is a possibility. What API will SL use? :\
Posted 5 years ago2019-01-14 16:19:45 UTC Post #341653
SharpLife currently has the maximum texture size set to 256, but it's a cvar that you can increase to any value, or you could remove the limit from the code altogether.

The only limit I've found is that really large images fail to load on some graphics backends; I'll need to reproduce the issue and report it to see whether it's caused by Veldrid or something else.

SharpLife uses Veldrid to handle the graphics backend, and that supports OpenGL, Vulkan, DirectX, and Metal on Mac. All I need to do is add the code to let you choose and set up the correct backend (just a few lines of code for each backend) and it will work. Shaders can be written in GLSL and used by all backends, so it's pretty much ready to go already, though when I last tested it Vulkan had issues. They've probably been fixed by now.
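For the backend selection itself, something along these lines should be all that's needed; this is a sketch using Veldrid's startup helpers, not SharpLife's actual launcher code:

using Veldrid;
using Veldrid.Sdl2;
using Veldrid.StartupUtilities;

public static class BackendSelection
{
    // Minimal sketch: create a window and pick a graphics backend, falling back to the platform default.
    public static GraphicsDevice Create(GraphicsBackend requested, out Sdl2Window window)
    {
        var windowInfo = new WindowCreateInfo(100, 100, 1280, 720, WindowState.Normal, "SharpLife");
        window = VeldridStartup.CreateWindow(ref windowInfo);

        var options = new GraphicsDeviceOptions(debug: false, swapchainDepthFormat: PixelFormat.R16_UNorm, syncToVerticalBlank: true);

        return GraphicsDevice.IsBackendSupported(requested)
            ? VeldridStartup.CreateGraphicsDevice(window, options, requested)
            : VeldridStartup.CreateGraphicsDevice(window, options);
    }
}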
Posted 5 years ago2019-01-14 17:39:46 UTC Post #341654
Wow, I repeat that this will be the solution for all of us mod makers who use very large models and maps in terms of size and polycount and who add hordes of NPCs!!

One question: what will the structure of SL be? I mean, will it load a mod directly, or will it depend on a Half-Life folder with the valve folder and all the rest of the junk? I like the way Xash works (don't flame me, please). With it you only have to create a folder with whatever name you want, put the Xash exe, three DLLs and the mod's own folder in it, and that's all. Less than 10 MB in size. And you can launch a full mod as if it were a standalone game (at least if you have all the needed assets in the mod folder, of course).
Posted 5 years ago2019-01-14 18:15:47 UTC Post #341655
You told me a 3D skybox and an HD sky are possible.
Are bigger maps and bigger textures possible too?
Both are possible.
And WAD files are a pain in the ass too xD. Could you make it so we don't need WAD textures?
An image file in the mod folder works very well, at least for developing a mod.
Due to how maps reference textures it's not currently possible to ditch WAD files, since you can only have 15 characters in a texture name, which is not nearly enough for directory names.
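For reference, the 15 character limit comes from the miptex header shared by WAD3 files and BSP-embedded textures; a rough C# mirror of that layout (field names are mine) shows why directory paths won't fit:

using System.Runtime.InteropServices;

// Layout of a WAD3/BSP miptex header. The name field is 16 bytes,
// which leaves room for 15 characters plus a null terminator.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public unsafe struct MipTexHeader
{
    public fixed byte Name[16];      // texture name, 15 usable characters
    public uint Width;               // texture dimensions, multiples of 16 in the original tools
    public uint Height;
    public fixed uint MipOffsets[4]; // offsets to the 4 mip levels
}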
One question: what will the structure of SL be? I mean, will it load a mod directly, or will it depend on a Half-Life folder with the valve folder and all the rest of the junk? I like the way Xash works (don't flame me, please). With it you only have to create a folder with whatever name you want, put the Xash exe, three DLLs and the mod's own folder in it, and that's all. Less than 10 MB in size. And you can launch a full mod as if it were a standalone game (at least if you have all the needed assets in the mod folder, of course).
SharpLife is an all-in-one package: you make a mod by building it as SharpLife's game libraries and then deploy the entire engine as a Half-Life mod. This allows each mod to change the engine as needed. Currently the assemblies directory is 11.2 MB, but that includes debug builds and debug info, so it will probably add up to a similar size when it's all release builds.

SharpLife still needs a liblist.gam file, which the original engine uses to load the client.dll library that bootstraps SharpLife. This also allows Steam to find the mod and list it automatically.
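For anyone unfamiliar with it, a liblist.gam is just a handful of key/value lines; the values below are placeholders, and the exact entries SharpLife ends up needing are an assumption:

// Minimal liblist.gam sketch (placeholder values, not SharpLife's real settings)
game "My SharpLife Mod"
startmap "c0a0"
mpentity "info_player_deathmatch"
gamedll "dlls\hl.dll"
type "singleplayer_only"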

It does depend on the valve folder, since it loads assets from there. The current design allows it to load assets from other game directories as well, so you won't have to copy over Opposing Force files for instance, which makes it easier to use game-specific files without having to redistribute anything. Ideally maps can specify which games they use assets from to help with error handling.

All of this is configurable in the code, so you can remove the dependency on original content if you want.
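A tiny sketch of what that kind of multi-directory lookup could look like (a hypothetical helper, not SharpLife's actual file system code):

using System.Collections.Generic;
using System.IO;

// Hypothetical asset resolver: search the mod directory first, then fall back to other game directories.
public sealed class GameFileSystem
{
    private readonly List<string> _searchPaths = new List<string>();

    public GameFileSystem(params string[] gameDirectories)
    {
        // e.g. new GameFileSystem("mymod", "gearbox", "valve")
        _searchPaths.AddRange(gameDirectories);
    }

    public string ResolvePath(string relativePath)
    {
        foreach (var directory in _searchPaths)
        {
            var candidate = Path.Combine(directory, relativePath);

            if (File.Exists(candidate))
            {
                return candidate;
            }
        }

        return null; // caller decides how to report the missing asset
    }
}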
Posted 5 years ago2019-01-14 21:26:52 UTC Post #341656
:gak: Can't wait, I swear. If only I could see my mod run as it is now at more than 50 FPS, I'd be more than happy.
Posted 5 years ago2019-01-15 08:56:01 UTC Post #341661
OK, separate image files aren't possible, but 32-bit textures are possible, right? :) 8-bit textures are a pain in the ass too.

And I wonder, is it possible to make projectile type bullets instead of hitscan type? :)

(I know I'm asking for too much :) )

Hmm, it seems that with all this stuff you've coded, someone could make a battle royale mod XD
Posted 5 years ago2019-01-15 11:09:48 UTC Post #341662
WAD files don't support 32-bit textures, so no.

You can already make projectile bullets in vanilla GoldSource; it just takes some effort. Modifying the RPG/AR grenade code should get you started.

What I talked about before regarding a V2 of SharpLife is where all the newer stuff should go. V1 reproduces the original engine, using the original file formats. Then backwards compatibility is broken by using new formats optimized for performance and customization, along with a better toolchain. If I can do that, I can solve all of the problems caused by the file formats' limitations, texture name length being one of them.

That'll require a lot of support in the tools themselves, which is why I've been working on new compile tools. The map editor will also need to support the material system, so I hope Sledge will support plugins that can provide one.
Posted 5 years ago2019-01-15 14:58:36 UTC Post #341663
What about 24-bit textures? It's OK if only V2 supports this; I can't wait to see V2 :)

And I have coded projectile bullets before, based on the AR grenade, but it was very buggy and doesn't work if the entity limit or something is exceeded.

So I thought projectile bullets weren't suitable for GoldSrc, but if it's possible without messing with network or entity limits then alright. Maybe I just didn't code it properly.

And there is one more problem with GoldSrc: I don't want to use Milkshape, it's expensive and limited. I can use Blender, but it only supports HL2 SMD files, so maybe you could make a compiler from HL2 SMDs to HL1 MDLs.
Posted 5 years ago2019-01-15 15:04:03 UTC Post #341664
And I have coded projectile bullets before, based on the AR grenade
Better to use the hornet gun... But the entity limit is a problem (a BIG problem), as you say. How many entities will SL support at a time before struggling? Also, and remember that I am quite ignorant about coding, will it depend on the PC specs, or will this number adapt itself independently of the system SL runs on?