Forum posts

Posted 2019-01-09 18:00:31 UTC
in SharpLife - Dot Net Core based modding p Post #341604
I've committed the latest assemblies for the master branch. The previous ones were a couple of months out of date, and I made some changes to debug logging to make it easier to figure out why startup might fail.

The native wrapper now logs to <gamedir>/logs/SharpLifeWrapper-Native.log.

If you enable debug logging (DebugLoggingEnabled=true in cfg/SharpLife-Wrapper-Native.ini), you'll now get this in the log:
[09/01/2019 18:56:14 +0100]: Managed host initialized with game directory sharplife_full (client)
[09/01/2019 18:56:14 +0100]: Configuration loaded
[09/01/2019 18:56:14 +0100]: CoreCLR loaded from C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App\2.1.2
[09/01/2019 18:56:14 +0100]: Runtime started
[09/01/2019 18:56:14 +0100]: AppDomain 1 created
[09/01/2019 18:56:14 +0100]: Managed host started
[09/01/2019 18:56:14 +0100]: Attempting to load assembly and acquire entry point
[09/01/2019 18:56:14 +0100]: Created delegate to entry point SharpLife.Engine.Host.NativeLauncher.Start
[09/01/2019 18:56:14 +0100]: Attempting to execute entry point
[09/01/2019 18:56:16 +0100]: Entry point executed with exit code 0
[09/01/2019 18:56:16 +0100]: Shutting down managed host
[09/01/2019 18:56:16 +0100]: Exiting with code Success (0)
The engine now returns an exit code indicating if anything went wrong. Since the engine can't log anything until the game directory is known, a missing game directory is reported as UnhandledException, since it throws an exception. If there are no command line arguments at all, the error is NoCommandLineArguments.

This should cover the different problems during startup.
I made a small tool to find the error code returned by LoadLibrary calls. This is needed because the engine doesn't report the error code for failed client.dll library loading.

Repository here: https://github.com/SamVanheer/NativeDllTester
Download: https://github.com/SamVanheer/NativeDllTester/releases/tag/V1.0

See the README for usage.

The issue that made this tool necessary: https://github.com/ValveSoftware/halflife/issues/2086

I also included additional information for finding missing dependencies using other tools, since the error code won't tell you which dependencies are missing.
I used Process Monitor to find it by looking for library load attempts after the client.dll is loaded; this is what the relevant information looks like:
User posted image
The missing library is a debug version of a VS2017 redistributable library that contains the C runtime. The redistributable only contains the release version, and the client.dll was a debug build; recompiling it as a release build solved the problem. If you have VS2017 with C++ support installed, the debug version will have been installed as well, so you would be able to run it regardless.

This should make debugging "could not load library <name>" errors much easier.
Posted 2019-01-07 22:17:27 UTC
in playing with the console command list Post #341579
You can get the command arguments using cl_enginefunc_t::Cmd_Argc and cl_enginefunc_t::Cmd_Argv, just like you'd do it on the server side.

All the function member does is store the callback invoked for a named command; you can't control which one is executed based on inputs. You'd have to do that in the callback itself.

If your goal is to override an engine command conditionally based on what's passed into it, you can create your own callback that stores off the original function and forwards calls to it unless your condition is matched.

Unfortunately you can't add state to the callback since it's all basic C style code, so you'll have to either use global variables or create a lookup table to get a stateful object that will handle the callback. If you're going to do that, registering the same callback for all your commands and then using the first argument to find the handler should do.

The complete definition for the engine's data structure is this:
typedef struct cmd_function_s
{
  cmd_function_s *next;
  char *name;
  xcommand_t function;
  int flags;
} cmd_function_t;
The flags member is used to keep track of where commands came from so they can be removed (client, server, GameUI). Otherwise, if a library gets unloaded the command can still be executed, but the callback will point to freed memory.

As for multi-line formatting, see this: https://twhl.info/wiki/page/TWHL:_WikiCode_Syntax#h5bfbcfa35e405
Referencing this here so everybody can see it: https://github.com/malortie/uncompleted-compilable-hl-opfor/issues/2

If anybody knows of tutorials for learning to work with the SDK, can you post them here?
It's crashing because an env_beam between two entities is trying to use an entity that's been killtargeted. It passes what is supposed to be the entity origin by reference to the engine, which then tries to dereference a garbage address. That's causing the crash.

I'm going to decompile the map and see how this entity setup works; I'm thinking a killtarget isn't working properly here.

Edit: I've found the problem. env_spritetrain isn't fully implemented, so it never triggers the "fire on pass" targets for its path_corner entities, which are supposed to killtarget the alien beam effects. As a result the beam tries to use one of the alien ship entities, which is killtargeted when the ship hits a path_track of its own. The beam then passes a reference to that ship's origin to the engine, which crashes when it dereferences the freed memory.

To fix this, env_spritetrain needs to be implemented. It should be relatively easy to do: inherit from func_train and internally create an env_sprite that uses MOVETYPE_FOLLOW to attach itself to the train.
He forgot to add an import library. You can get it from the original Half-Life GitHub repository. Clone it and copy the file utils/vgui/lib/win32_vc6/vgui.lib to SourceCode/utils/vgui/lib/win32_vc6.

Edit: the developer just committed all of the missing library files, so you can just pull the latest commit.
I've never used it before, but after checking the source code I've found two problems:
  • The projects were never updated for VS2017, so it has the "'abs': ambiguous call to overloaded function" errors
  • The last commit in the op4 branch restructured the folders without updating the project files
To fix this you can just open the vcxproj and filters files and manually change the paths so that each one points to the "gearbox" directory:
<ClCompile Include="..\..\dlls\aflock.cpp" />

becomes:

<ClCompile Include="..\..\gearbox\dlls\aflock.cpp" />

The same goes for ClInclude elements.

Both the gearbox_dll and gearbox_cldll projects need this done, and the filters files also need to be updated to reference the correct files. It's a bit of work, but once it's done it should work fine.

I've reported the issue, hopefully he'll fix it pretty soon: https://github.com/malortie/halflife/issues/16

Otherwise you can try reverting the last commit so the folder structure matches the project files.

I also found this, it might be a better option for you: https://github.com/malortie/uncompleted-compilable-hl-opfor
Posted 2019-01-05 18:13:33 UTC
in SharpLife - Dot Net Core based modding p Post #341555
I've been working on the scripting system some more. I was going to rework CSharpScript's script class loading behavior to make it work more like the assembly-based provider does things, but I discovered that there is no way to directly access the generated assembly's Reflection instance. I've requested that this be added: https://github.com/dotnet/roslyn/issues/32200

Once this is possible the assembly and CSharpScript providers should behave largely the same way. There won't be any need to return the type of the script class, and a callback in the script info instance will allow the user to resolve the correct class. This will make it much easier to implement per-entity custom classes by simply finding the class defined in a script that inherits from the right base class as shown in bonbon's image.

That callback should just do this:
private static Type FindEntityClass(CSharpScriptInfo info, Assembly assembly)
{
    var scriptClasses = assembly.GetTypes().Where(t => t.IsClass && typeof(TScript).IsAssignableFrom(t)).ToList();

    if (scriptClasses.Count == 0)
    {
        //No classes found that extend the required type
        throw new ArgumentException($"No script classes found that inherit from {typeof(TScript).FullName}");
    }

    if (scriptClasses.Count > 1)
    {
        //More than one candidate class found
        throw new ArgumentException($"Multiple script classes inherit from {typeof(TScript).FullName}");
    }

    return scriptClasses[0];
}
Where TScript is the type of the entity that it needs to inherit from.

The same thing will be possible using the assembly provider, which makes it easier to use both.

With this in place the game should have a simple scripting system that lets you load scripts once and then create instances of types when you need them like this:
//The script system should cache the loaded script so repeated calls return the same script instance
var script = Context.ScriptSystem.Load("my_script_file.csx");

//The script itself doesn't contain any object instances, these are created on demand only
var entity = script?.CreateEntity(Context.EntityList, typeof(Item));
It may be easier to rework the scripting system to only provide an abstraction for the provider and let the user get the type information from the resulting Assembly, but this would restrict the supported languages to whatever can produce an Assembly instance.

In that case I can remove the type parameter from the scripting system itself, since its only purpose is to make it easier to access individual script object instances and it's better to let users keep track of them. Perhaps splitting the system up into providers and type instance factories would allow for both; I'll need to look into it further.

I'm also going to make sure that scripts can load other scripts using #load, which CSharpScript already supports but which requires me to provide a SourceReferenceResolver. This should let you split scripts up just like you can with Angelscript. That in turn means that this resolver will need to understand how the SteamPipe filesystem works. That shouldn't be a problem, since the resolver class API is pretty easy to use and should just pass through to the filesystem class I've already created for SharpLife.

Edit: I just discovered that there is an existing framework that does pretty much the same thing, called the Managed Extensibility Framework: https://docs.microsoft.com/en-us/dotnet/framework/mef/

It looks like it can do pretty much everything that this system can currently do, albeit without CSharpScript support. I'm going to look into it some more to see if it can do what I'm doing with my own system.

Edit: MEF can do everything that my system can do, and for the use case I have in mind each script assembly would need its own CompositionContainer, which eliminates the problem with adding assemblies later on.

That leaves only one problem, which is memory usage. I'll have to look into how much it uses for small assemblies like scripts. I read some reports about high memory usage due to dependent assemblies being loaded, but CSharpScript requires explicit references to all dependent assemblies, so they have to be loaded already. Indirect dependencies might be a bigger issue, but that remains to be seen. The biggest thing I need to look into is how much memory a CompositionContainer needs on its own.

Edit: it looks like an empty container is just 320 bytes in total, though that may not be 100% accurate. It seems worth using like this, then:

Assembly and CSharpScript based plugins are all part of one container, exporting their main script implementation using MEF. Required interfaces can be exported from the game and imported by plugins using MEF, allowing for dependency injection on both sides.

CSharpScript based map-specific scripts get their own container, with scripts being cached along with their container based on file modification time to allow easy reloading. This reduces memory usage by loading scripts only once per server lifetime. Since a compiled script is just an assembly it can't be unloaded, just like a regular assembly; otherwise it'd leak tons of memory through redundant compilation and assembly generation.

Scripts that are reloaded, either due to a newer version existing on disk or by forcing a reload (cheat command) would throw out the cached version and reload it entirely. The old version would continue to be in memory and objects created from it will continue to exist and be referenced as before, so a complete refresh is needed to oust old versions (reloading the map should do it since it's a map specific script).

I'm going to look into reworking the scripting system to support this, so most of my current provider code will probably end up being thrown out. The script validation code and API configuration types are still needed, so most of it will still see use. This will make it impossible to use non-CLR based languages for plugins or scripts, but I doubt that's a big deal since C# and VB are more than enough to support the required use cases.
Posted 2019-01-04 13:18:24 UTC
in custom blood colours Post #341542
It's used for particles as well, if I'm not mistaken.
I found this article while cleaning up my bookmarks: https://www.bluesnews.com/abrash/chap64.shtml

It goes into detail on how Quake's visibility calculation was developed and how it works. It's an interesting read.

There are more articles on this website, you can find them here: https://www.bluesnews.com/abrash/contents.shtml
Posted 2018-12-29 09:36:38 UTC
in getting number of sprite frames on server Post #341521
Are you sure it's actually a sprite? If it's a studio model it will return what looks like the largest valid value for pev->body for that model.
Posted 2018-12-28 22:34:49 UTC
in Sven Co-op forums shutting down Post #341509
Or are there other reasons? I remember the sour relationship you had with the members of the Sven-Coop development team. Anyway, you will be a little tempted to feel a kind of cruel satisfaction ... :crowbar:
I am in no way involved with this decision, and my only concern is that valuable information could be lost.
Posted 2018-12-28 20:33:47 UTC
in Sven Co-op forums shutting down Post #341505
This doesn't seem a valid reason to shutdown when you can switch to a different more secure bulletin board platform...
That may have to do with the planned new forum that was to be written from scratch, but I don't know if that's still the plan.
Posted 2018-12-28 18:23:29 UTC
in Sven Co-op forums shutting down Post #341503
https://forums.svencoop.com/showthread.php/46303-Forums-shutting-down-March-2019

The forums are getting shut down soon, so I'd advise anyone who needs information hosted there to back it up, and maybe rewrite it as a wiki article here.
Posted 2018-12-26 16:47:36 UTC
in SharpLife - Dot Net Core based modding p Post #341490
I'm currently working to make SharpLife.Scripting more flexible when using CSharpScript, to make it usable for client side scripting. Currently scripts can access whatever they want, so they can use things like reflection, filesystem APIs, etc., which can cause security issues. To prevent this, I've been working on a way to explicitly register the API that scripts can access.

It currently looks like this:
private static void ConfigureAPI(APIConfigurationBuilder builder)
{
    builder.AddAssembly(typeof(App).Assembly);
    builder.AddNamespace(typeof(App).Namespace);

    {
        var type = builder.AddType<App>();
        type.AddMethod(a => a.Test(default, default));
    }

    {
        var type = builder.AddType<string>();
        type.AddMember(() => string.Empty);
        type.AddMethod(s => s.Split(default));
    }
}
First you add assemblies that you want scripts to be able to access. Then you can add namespaces and types to grant or deny access to.
Types can have properties, fields and methods added where you use lambdas to specify the member being added.

It's possible to specify whether you're adding something to allow or deny access, or to inherit the default access settings for that particular kind of API functionality (namespace, type or member). These defaults can be configured; whichever access rule is set when the configuration is built will be used for everything that doesn't specify a rule of its own.

As you can see in the example, you can select specific method overloads. Since you only need type information and not actual values, use of default and default(type) is pretty common.

Once a configuration has been finished it becomes immutable, preventing changes from being made to the API.

By default, an empty APIConfiguration instance denies access to everything except built-in types (int, float, bool, string, etc).
If a script accesses an API that it isn't permitted to access, the script is never compiled, so no code is generated and no static initializers will run, preventing any potentially malicious code from executing.

Malicious code includes but is not limited to using filesystem APIs to create files that interfere with client side game functionality, using reflection to access engine functionality to manipulate game behavior (e.g. slowhacking), and using web APIs to download other files that could be used to execute more malicious code.

A properly configured API configuration can prevent abuse while still allowing access to any APIs that are needed. It is possible to block access to parts of types that can be abused, for instance you could allow querying of types and their members, but not allow any invocation of members through reflection.

API configurations are separate from script providers, so one configuration can be used by multiple providers at the same time (e.g. CSharpScript and Python).

I'm still working on it, but most of the functionality is already working. I'm using the Roslyn code analysis API to check parsed scripts; this API allows for much more than that, so it's pretty interesting.

Does this look like it makes sense? Is it easy to understand the purpose and use of this functionality, the reason why it's necessary? Does anyone have any suggestions to improve it if possible?
Posted 2018-12-21 11:06:50 UTC
in VHLT source code cleaned up Post #341472
It's easier to write everything in the same language so the code can be shared. The engine's data structures for BSP data can also be used by the compiler, so I can increase limits very easily.
It's also much easier to provide debug info when the compiler breaks, and it's easier to maintain this codebase since it's designed and written by one person in one go, without a bunch of #ifdefs and varying code styles.

Also, C is very poorly suited to writing tools and games these days. A fair amount of code in these tools does nothing more than free memory, something that even C++ could do automatically through deterministic destructors.
Posted 2018-12-14 08:42:21 UTC
in SharpLife - Dot Net Core based modding p Post #341450
I've never used Unity before, but I already planned to do something similar. If you look at Sven Co-op's scripting API you'll see that the design is fairly similar; my design for SharpLife is a refined version of that. I'll need to look into making it easier to make single class scripts that are automatically associated with an entity, since it's currently designed to use single scripts for entire maps.
Posted 2018-12-13 20:49:23 UTC
in VHLT source code cleaned up Post #341446
I'm posting this separately for visibility: does anyone have a large map that takes a while (~5 or more minutes) to fully compile using a full run of all tools? I'm going to run a performance analysis tool to find the code that's taking the longest, so I can see if anything can be optimized. Ideally with default textures only, so I don't need to reference other wad files.
Posted 2018-12-13 16:17:43 UTC
in VHLT source code cleaned up Post #341444
I've been keeping track of code I've found that could be optimized in the original version; here's what I've got so far:
  • The aforementioned use of a reader-writer lock to ensure thread-safe access to the planes array eliminates potential garbage reads
  • A fair amount of code leaks memory by not freeing the faces list created by MakeBrushPlanes. Using a std::unique_ptr with the function FreeFaceList as its custom deleter eliminates the leak. Though I've never heard of CSG running out of memory, in a shared process this matters a lot more, especially once RAD has started. This isn't a problem in my version of the tools because it's garbage collected
  • The faces list could be converted to a std::vector<bface_t> to automatically manage the memory and to allow faster iteration by storing the faces by value. Note that this will increase the cost of sorting the sides (the entire face needs to be copied, instead of just changing its position in the list), but that cost may be offset by the speed increase resulting from all faces being located in the same memory range, and thus more likely to be in the CPU cache. Sorting is only done once per brush per hull (4 hulls) at the most (disabling clipping can skip the whole process, some contents types don't create faces at all either), but iteration is done many times, and additional faces are added during the brush expansion process
  • Origin brush processing and zhlt_minsmaxs handling both use the CreateBrush function to get the bounding box from a brush. However CreateBrush does a lot more work than just calculating the bounding box, work that can be avoided entirely thus speeding up the parsing of the .map file. If properly reworked the number of planes generated can also be reduced by not letting these two features add planes to the array used by the actual brush creation operation. This can reduce the chances of hitting the internal and/or engine limit on the number of planes, since the planes that contain origin and zhlt_minsmaxs brushes are not often going to be used by other brushes
  • Merging CSG and BSP will cut down on work done to save and load the data generated by CSG, speeding up both tools and eliminating a possible data corruption issue if the plane data is modified between CSG and BSP execution
  • As is the case with the engine, there's code that uses a varying number of (-)99999 to denote maximum/minimum values, which can cause problems when the engine limit is increased and larger maps are possible. For instance the function isInvalidHull uses 99999 as its limit, so brushes that are partially or completely beyond this limit can end up marked as invalid. Using <climits>'s INT_MIN and INT_MAX constants appropriately should solve the problem
That's right, those entities aren't removed from the entity data after RAD's done.
If you look at maps like the Infested series for Sven Co-op you'll see that it uses the "second train" approach. This does cause issues when one of them gets blocked, for instance the elevator in the first map can end up out of sync with the doors stuck closed or in the wrong place.
Texlights are only possible if you have the original lights.rad, since lightmaps contain lighting data from multiple sources. You might be able to determine which textures were given a texlight value based on lighting values emitted near instances, and approximate the values but it's never going to be as accurate as you want it to be.
Posted 2018-12-12 12:57:10 UTC
in VHLT source code cleaned up Post #341435
Yeah we'll just have to wait and see for performance.

I've been converting CSG's code and I've noticed some areas where performance can improve by not having to write stuff to files. CSG normally writes plane data to two files: the BSP file and a separate .pln file. BSP loads the .pln file or, if it doesn't exist, converts the data from the BSP file.

Since this data is now shared directly it eliminates the file I/O overhead and the need to support conversion from the BSP file.

This is probably why CSG was merged into BSP for Source.

I've also found a comment indicating a possible race condition in the plane generation code: https://github.com/SamVanheer/VHLT-clean/blob/remove-xash/src/zhlt-vluzacn/hlcsg/brush.cpp#L28

There is a possibility that one thread is modifying the plane list while another is reading from it, which could cause it to read garbage.

To fix this the code needs to use a reader-writer lock: https://docs.microsoft.com/en-us/dotnet/api/system.threading.readerwriterlockslim?redirectedfrom=MSDN&view=netframework-4.7.2

That's the best solution for this type of problem: it eliminates the race conditions and garbage reads, but acquiring and releasing the lock will slow things down a bit.
Posted 2018-12-10 09:30:19 UTC
in VHLT source code cleaned up Post #341414
The tools are already running as much as possible on worker threads. The advantage of using GPU calculations is that the GPU has cores made for mathematical calculations and can handle the spreading of work by itself since it already does that to optimize graphics rendering.

I don't know how much difference there could be in terms of performance since I can't find anything on that, so we'll have to wait and see how it works out. I should be able to get the CSG threading part done soon; then I can do some tests to see how it compares. Unfortunately, since CSG usually finishes so quickly it probably won't show much difference; it may even be slower due to the overhead of creating GPU resources. RAD is really where compilation takes the longest, so that's where it can come in handy.

I'm not sure why having more cores wouldn't have an effect; it could be that each core's frequency is lower, so it doesn't have as much effect. Perhaps the OS is under-reporting the processor count, so the tools might not be taking advantage of all cores, but you should see that in the compiler settings output.

It's also possible that the worker thread code isn't optimized for a large number of threads and is locking too much to take full advantage of what it gets; I'm not sure.
Posted 2018-12-10 07:45:40 UTC
in VHLT source code cleaned up Post #341412
There are a lot of things in this design that make it faster. It only has to parse the command line once, data is passed between tools directly instead of being written out to files and parsed back in, and there are no fixed size buffers, so it's easier to remove stuff without having to touch and move a bunch of lists.

Also, memory allocation in C# is much faster than in C++: http://codebetter.com/stevehebert/2006/03/03/raw-memory-allocation-speed-c-vs-c

The articles and discussions I've found on this are pretty old, and Core is known to be much faster and more efficient, so it may actually be the best tool for the job.

If I can use hardware acceleration for maths-intensive stuff like VIS and RAD it'll go even faster. You could then compile maps on your GPU.

I'd also consider any slowdowns to be worth the cost if it means having tools that you can actually understand the inner workings of and modify as needed. Very few people know how to make adjustments to these tools.
Posted 2018-12-09 14:47:54 UTC
in VHLT source code cleaned up Post #341408
I think I've nailed down the design of the compiler and tools to make it easy to use as a library as well as a command line program:
ILogger logger = ...;
var providers = new ICompileToolProvider[]
{
    CSGTool.Provider,
    BSPTool.Provider,
    VISTool.Provider,
    RADTool.Provider
};

using (var compiler = new MapCompiler(logger, providers))
{
    try
    {
        compiler.Compile(mapData, sharedOptions);
    }
    catch (CompilationFailureException e)
    {
        logger.Error(e, "An exception occurred while compiling");
        throw;
    }
}
The compiler takes a logger and a list of tool providers. A provider specifies the tool's name and type, and can create instances of the tool.

Tools can be configured using an initializer:
var provider = CSGTool.Provider.WithInitializer(csg => csg.HullData = myHullData);
This creates a provider that invokes an initializer function on the tool. In this case the hull data used by CSG is overridden to use your own settings, but more options will be available for each tool.

This approach also allows you to create custom tools that can be inserted between other tools. As long as your tool returns the correct data type for the next tool this will work just fine. You could use this to add tools like analyzers or something that can output the data returned by a tool. For instance, if RAD were to return lighting data that isn't in the BSP file as its result you could output it to a file.

As far as command line usage goes, the library I'm using doesn't support the older -option value syntax, only -ovalue or --option=value. Since you'd need a new interface to use this compiler properly anyway, I don't see this as a problem. With this compiler you only need to invoke the frontend once to run all the tools you want, so having different syntax can actually prevent misuse by erroring out on the old syntax.

This also lets me use better names for options. Whereas the old tools would often have -no<whatever> arguments, this one just has --<whatever>; for instance -noresetlog becomes --resetlog.

I'm trying to avoid doing any file I/O in the compiler itself, so any files that need loading will be loaded either before the compiler is created or in the initializer for the tool that needs the data. If it's output data, then it will need to be handled as a post-tool operation, possibly by a custom tool.

This also means that any data that is being output to a file should be available to you programmatically when invoking the compiler as a library. This can make it much easier to access some data, like point file output.

As far as log output goes, I'm currently using the standard Serilog message template, modified to include the name of the tool that's logging the data:
[14:54:40 INF] [MapCompilerFrontEnd] test
The format is:
[time log_level] [tool_name] message newline exception
Since this drastically alters the output in log files, I'll probably add an option to remove the extra information and just log what the old tools do.

I've looked into how work can be spread over multiple threads, and it looks like C# has a class for that: https://docs.microsoft.com/en-us/dotnet/api/system.threading.threadpool?view=netframework-4.7.2

It's possible to configure this to match the original's settings; then it should be a pretty simple task to dispatch work and await completion. However, thread priority is something that should not be changed according to what I've found, since this class has a monitoring system to help manage how threads are used. It may not even be necessary to change the priority if it's able to see other processes, so that will require closer inspection once there's some work being done.
Posted 2018-12-08 16:40:40 UTC
in VHLT source code cleaned up Post #341406
I'm looking at the code that handles .map file loading and there's special code for QuArK style texture coordinates. However, according to its official documentation this is only used when you use the "Quark etp" format. The Valve 220 format uses the correct values.

I'm bringing this up because the code used to handle this has a pretty substantial presence in the map loading and CSG stages. The script tokenizer has to handle it specially, and CSG has to convert texture coordinates differently for this format, which means the map data structures need to store two different formats.

I found that QuArK saves .map files correctly when used in "Half-Life" mode, which is when you select it as the active game: http://quark.sourceforge.net/infobase/maped.tutorial.starting.html#entering

So I will not be porting the QuArK-specific //TX# way of specifying texture data in those files. If these tools ever get finished, you'll need to properly configure QuArK for Half-Life to use them.
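For comparison, this is roughly what a Valve 220 brush plane line looks like, with the texture axes stored inline (the coordinate and texture values here are purely illustrative); QuArK's etp output instead relies on a trailing //TX# comment for its texture data:

```
( -64 -64 0 ) ( 64 -64 0 ) ( 64 64 0 ) CRATE01 [ 1 0 0 0 ] [ 0 -1 0 0 ] 0 1 1
```

The two bracketed groups are the U and V texture axes plus offsets, followed by rotation and the two scale values, which is why no comment-based extension is needed.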
Posted 2018-12-04 17:12:11 UTC
in SharpLife - Dot Net Core based modding p Post #341386
.NET Core 3's preview has been released, so I can start doing some experiments with model viewer's design in Core: https://blogs.msdn.microsoft.com/dotnet/2018/12/04/announcing-net-core-3-preview-1-and-open-sourcing-windows-desktop-frameworks/

I'm also working on getting the Tokenizer class ready for .map file reading; it's now flexible enough to do all that. I've also eliminated the memory allocation overhead involved in creating lists to configure it: you can now just cache the tokenizer configuration instance and reuse it.
Posted 2018-12-03 13:19:36 UTC
in VHLT source code cleaned up Post #341383
Yeah, I'm giving it a shot:
User posted image
Each tool is a library: the MapCompiler library provides the compiler, and MapCompilerFrontEnd is the command-line interface to it.
You could invoke a compile programmatically with this, and potentially even generate BSP files dynamically, though it would still take time to compile everything.

I need to rework the Tokenizer class first so it can read .map files properly, since apparently QuArK embeds texture info in comments and such.
Posted 2018-12-02 17:05:39 UTC
in VHLT source code cleaned up Post #341376
I've cleaned up the VHLT V34 source code to make it easier to read: https://github.com/SamVanheer/VHLT-clean

The remove-ifdef branch removes all feature ifdefs. Where there was originally a #define for a feature and #ifdef feature ... #endif blocks around code, either just the code the ifdef surrounded remains, or, if the feature was disabled, the code was removed entirely.

There's also a branch remove-xash that removes all Xash specific code.

I ran a tool that counts lines of code on it; here are the results.

For the original codebase with no changes:
Language        files   blank   comment    code
C++                51    5437      4046   50560
C/C++ Header       33     964       378    5533
Total                                     56093
For the cleaned up version in the remove-xash branch:
Language        files   blank   comment    code
C++                51    4512      3272   35487
C/C++ Header       33     937       347    4045
Total                                     39532
So about 16,500 lines of code in the regular version of VHLT are dead code, never used at all. Some of it is Xash-specific, but not that much.

Some features were disabled, so their code was removed. Here they are:
  • ZHLT_DETAIL: some kind of old func_detail variant, was already obsolete
  • ZHLT_PROGRESSFILE: never implemented beyond command line argument handling, so it didn't work at all
  • ZHLT_NSBOB: the only thing I found was the definition; no code appears to exist for it
  • ZHLT_HIDDENSOUNDTEXTURE: would allow you to mark faces as hidden by setting zhlt_hidden on an entity. You can do this by ending a texture name with _HIDDEN, so I guess it was obsolete
  • HLBSP_SUBDIVIDE_INMID: seems to be intended to reduce the number of faces, but contributes to AllocBlock:Full errors so it was disabled
The cmdlib.h header is where all of these definitions were; it used to be 712 lines and is now 172. Two definitions are left in place because they depend on platform-specific functionality.
One is game_text's UTF8 conversion support, which relies on a Windows API function. It's not that hard to replace it with a cross platform alternative.

The other is Ripent's -pause parameter which was implemented only on Windows for some reason. This may have to do with the fact that it's implemented using atexit, so it may not work on Linux due to differences in how console resources are managed during program shutdown. Reworking Ripent's code to avoid use of exit should solve this problem.

I don't see any more #define statements used to control features anywhere, so I guess it's all cleaned up now.

To make this process easier I used some tools to speed things up. First I used sunifdef to process all files and remove definitions one by one. I wrote a batch file that does this and also commits all changes to Git automatically, so I could just run removeifdefs.bat <definition name>. You can find the batch file in the remove-ifdefs branch in src.

Note that to remove disabled sections you must modify the batch file to pass -Udefinition instead of -Ddefinition or it will turn on the feature.

All in all this took about an hour and a half to do.

My reason for doing this is that the ZHLT/VHLT source code has never been readable: you have to read past the definitions and keep track of which ones are active. More recent versions of Visual Studio do a lot of that work for you, but it's still hard. For example, the file wadpath.cpp is 92 lines now but was 174 lines before, nearly twice as long, containing code that isn't even used.

wadinclude.cpp is even worse. It used to be 212 lines; now it's 5 lines, and there's no actual code left in it. This is because there are three or more different versions of the same feature (wad inclusion) in the source code. Various long files are much easier to read now that they've been cleaned up.

I hope to use this to rebuild the codebase in C# so that the tools can be integrated more easily, and perhaps deal with some issues that exist in the current implementation. I don't know whether I'll have time to do it or not, but in any case, this cleaned up version is available for anyone to check out. I will not be making a build of this since it's identical to the original V34 build; if you want one you'll have to make it yourself.
I assume you mean jmf, not vmf, since vmf is Source's format?

There is no easy way to merge WADs based on used textures as far as I know, so you'll have to do it manually.
Posted 2018-11-26 16:18:43 UTC
in SharpLife - Dot Net Core based modding p Post #341330
I've made a standalone scripting system that makes it easy to load scripts from various sources: https://github.com/SamVanheer/SharpLife.Scripting

It comes with support for assemblies and CSharpScript. The design makes it easy to add support for other scripting languages, as long as you can do interop with them.

It's a separate repository so it doesn't fall under the HL SDK license.

The sample program I've included with it is a basic WPF app that lets you load scripts and display the Description property that all sample scripts must provide. Unfortunately, since this is a .NET Framework program it pulls in a lot of dependencies, and indirect dependencies don't seem to work, so I had to explicitly reference some .NET Standard libraries.

Hopefully when .NET Core 3 is released I can move the sample to Core to avoid those issues.

I plan to integrate this system into SharpLife and the new model viewer. Model viewer will use it to support extension plugins and scripts (e.g. adding a new tab with options), which should let you add whatever you want to modify models. Since CSharpScript is plain text, it should be trivial to implement small add-ons this way.
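As an illustration of why plain-text add-ons are trivial: with the Microsoft.CodeAnalysis.CSharp.Scripting package, a script can be evaluated from a string in a couple of lines (the script text here is just a placeholder; a real add-on would return an object implementing a known plugin interface instead of an int):

```csharp
using System;
using Microsoft.CodeAnalysis.CSharp.Scripting;

// Evaluate a plain-text C# script and get its result back.
int result = await CSharpScript.EvaluateAsync<int>("1 + 2");
Console.WriteLine(result); // 3
```

Since the script source is just a string, add-ons can be loaded from loose files in a mod directory without any compilation step on the user's end.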
Posted 2018-11-25 15:24:25 UTC
in TWHL Modded Minecraft Server Post #341323
Thanks, it's working now. I added an underground smeltery area to the spawn. It uses ProjectRed furnaces, which unfortunately can't be automated, but it looks pretty neat.
Posted 2018-11-25 14:37:44 UTC
in Disabling Spectator modes Post #341318
Like potatis_invalid said, you need to focus on Observer_HandleButtons. Remove the if statements that handle the modes you want to disable, and make sure the remaining ones don't try to set those modes.
Posted 2018-11-25 13:03:32 UTC
in TWHL Modded Minecraft Server Post #341316
The server needs a restart; Refined Storage is glitched and refuses to show items.
Posted 2018-11-24 14:27:06 UTC
in TWHL Modded Minecraft Server Post #341312
I went to Mars and Asteroids and got Desh and Titanium, so we can build end-game Galacticraft stuff now.
Yeah your best bet is ripenting the entity out after compilation.
Posted 2018-11-23 15:43:13 UTC
in TWHL Modded Minecraft Server Post #341304
I went to the Moon and got some meteoric iron. There's now iron growing down below so you can get more.
Posted 2018-11-18 20:36:35 UTC
in Weapon Strip (HUD question) Post #341271
Yeah that looks good.

If you want to paste blocks of code you should use pre tags: https://twhl.info/wiki/page/TWHL:_WikiCode_Syntax#wiki-heading-16
Posted 2018-11-18 18:17:38 UTC
in Weapon Strip (HUD question) Post #341268
In this if check: https://github.com/ValveSoftware/halflife/blob/5d761709a31ce1e71488f2668321de05f791b405/cl_dll/ammo.cpp#L579

Add m_pWeapon = NULL;

You may also want to set it to NULL a bit lower down, where the retrieved WEAPON* can be NULL, so the last weapon's ammo HUD isn't left visible if no definition exists.

I haven't tested it but this should work.
Posted 2018-11-16 19:24:23 UTC
in TWHL Modded Minecraft Server Post #341255
It's kinda too bad we don't have Mekanism. It has a proper teleporter that could make traveling around a lot easier, especially between dimensions.
Posted 2018-11-12 16:25:19 UTC
in TWHL Modded Minecraft Server Post #341236
Looks like it's working, but it lost the last few changes I made. Nothing important; I was just digging a little.
Posted 2018-11-12 13:21:40 UTC
in TWHL Modded Minecraft Server Post #341234
I think something's wrong with the server. I was on it and then I timed out; now I can't rejoin and I keep timing out. It also says somebody else is on it, but I can't see who it is. It might think I'm still connected.
Posted 2018-11-10 11:23:36 UTC
in TWHL Modded Minecraft Server Post #341225
The Twitch client is pretty easy to use. Just import the profile I made and launch it.
User posted image
I made a bunch more disks to make sure the system won't run out of storage space. There's more than enough power coming in (2x9 level 6 solar panels) so it should run for a few weeks before needing more space.

I've also built a steam turbine for raw power generation (25K+ RF/t). To turn it on just activate it from the controller on the side and flip the lever to enable steam generation.

To make getting down there easier I've added elevators. They're the white blocks in front of the central ladder: jump to go up, sneak to go down. Each floor has a sign showing what's on it to make things a bit easier.

There's also an RFTools Builder set up as a clearing quarry on the ground floor that you can use to rapidly clear out large areas.
Posted 2018-11-07 20:47:55 UTC
in TWHL Modded Minecraft Server Post #341216
I built a small starting area around the spawn. I'm not sure if it covers the entire area but it's easy enough to get in.

There are 6 rooms with equipment to get started with: armor, a sword, a crate of supplies that you can pick up and move, and a bed.

Since the spawn is next to a preexisting tower, part of the wall connects with it, so monsters may fall in that way. Apparently the tower connects to an underground crypt of some sort; I bumped into it while building the walls. The razor wire on top of the walls will damage you if you touch it. I may electrify it to deal more damage to spiders trying to climb over.

There's a capacitor powered by 2 solar panels that you can tap power from if you need some, but please don't take it with you.
Posted 2018-11-05 16:22:07 UTC
in TWHL Modded Minecraft Server Post #341198
It's accepting my seeds now, thanks :)

I've updated the Twitch mod profile: https://www.dropbox.com/s/0s21tbt9y1dxs7p/TWHL%20ModPack-1.0.zip?dl=0

I'm using this to play; it works fine.
Posted 2018-11-05 15:22:59 UTC
in TWHL Modded Minecraft Server Post #341196
It looks like you forgot to turn on compatibility for Mystical Agriculture in Immersive Engineering's config file. I can't put the seeds in the cloches.

The file "config\immersiveengineering.cfg" needs it turned on in the "compat" section.

For good measure the client pack should also be updated so it's consistent. It shouldn't break anything on the map.
Posted 2018-11-04 10:52:15 UTC
in Safe Entity Threshold Post #341178
The maximum number of edicts is 2048; any more than that and the game will crash when the engine tries to network an entity with an index higher than 2047.

You can also specify this in liblist.gam using the "edicts" key. The "commandargs" key no longer works, so don't use that.
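For example, raising the limit in liblist.gam might look like this (the game name is a placeholder; only the "edicts" key matters here):

```
game "My Mod"
edicts "2048"
```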
Posted 2018-11-04 10:47:16 UTC
in TWHL Modded Minecraft Server Post #341177
I'm unable to connect to the server. The error message says "connection refused" with no further information.

I've tried both a Twitch-based install and one set up using your guidelines.
This post was made on a thread that has been deleted.