Also, hisat2-build has several options to tune performance and memory consumption during FM-index generation, so it might be possible to get by with less memory (if that really is the problem). I would start by setting these additional parameters:
--noauto --bmaxdivn 8 --dcv 2048
And gradually increase --bmaxdivn from there; larger values lower peak memory, but they will also increase run-time.
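For example, a full build command would then look something like this (a sketch: genome.fa, genome.ss, genome.exon and the index prefix genome_index are placeholder names for your own files, and -p sets the number of threads):

    hisat2-build -p 8 --noauto --bmaxdivn 8 --dcv 2048 \
        --ss genome.ss --exon genome.exon \
        genome.fa genome_index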
From the manual:
-a/--noauto
Disable the default behavior whereby hisat2-build automatically selects values for the --bmax, --dcv and [--packed] parameters according to available memory. Instead, user may specify values for those parameters. If memory is exhausted during indexing, an error message will be printed; it is up to the user to try new parameters.
--bmax <int>
The maximum number of suffixes allowed in a block. Allowing more suffixes per block makes indexing faster, but increases peak memory usage. Setting this option overrides any previous setting for --bmax, or --bmaxdivn. Default (in terms of the --bmaxdivn parameter) is --bmaxdivn 4. This is configured automatically by default; use -a/--noauto to configure manually.
--bmaxdivn <int>
The maximum number of suffixes allowed in a block, expressed as a fraction of the length of the reference. Setting this option overrides any previous setting for --bmax, or --bmaxdivn. Default: --bmaxdivn 4. This is configured automatically by default; use -a/--noauto to configure manually.
--dcv <int>
Use <int> as the period for the difference-cover sample. A larger period yields less memory overhead, but may make suffix sorting slower, especially if repeats are present. Must be a power of 2 no greater than 4096. Default: 1024. This is configured automatically by default; use -a/--noauto to configure manually.
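To put the --bmaxdivn numbers in perspective: for a reference of roughly 3 Gbp (about the size of the human genome), --bmaxdivn 8 caps each block at about 3,000,000,000 / 8 = 375 million suffixes, half of what the default --bmaxdivn 4 would allow. These are just illustrative back-of-the-envelope figures, not values from the manual.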
Yes, try --bmaxdivn 8, then 12, 16, and so on. According to the documentation this value is a fraction of the reference length, so 8 means each block holds at most 1/8 of the reference's suffixes. If the process still gets killed, I would increase the number further; note that setting it to 1 would in fact use more memory, not less. It might also help to decrease the number of CPUs used for indexing, because each thread may require its own share of memory. If none of that helps, you should have a chat with your local IT support about how to monitor resource consumption. A last resort would be to drop the --ss and --exon annotations entirely; they seem to increase memory usage considerably, but that comes at the price of not having annotated splice sites in the index.
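If you want to automate that trial-and-error, a small shell loop along these lines would do it (again with placeholder file names, and -p reduced to 4 threads):

    # Sketch: retry with progressively larger --bmaxdivn until one run
    # finishes; a run killed for exceeding memory exits non-zero, so the
    # loop moves on to the next value.
    for DIV in 8 12 16 24 32; do
        hisat2-build -p 4 --noauto --bmaxdivn "$DIV" --dcv 2048 \
            --ss genome.ss --exon genome.exon \
            genome.fa genome_index && break
    done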
This pretty much has to be memory-related. You can investigate this yourself by monitoring memory usage throughout the run.
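The simplest way to do that is to wrap the command in GNU time (the standalone /usr/bin/time on Linux, not the shell built-in), which reports the peak memory when the process exits:

    /usr/bin/time -v hisat2-build --noauto --bmaxdivn 8 --dcv 2048 \
        --ss genome.ss --exon genome.exon genome.fa genome_index
    # look for "Maximum resident set size (kbytes)" in the report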
It is easy to think that 512 GB has to be enough, but it really may not be. The HISAT2 documentation says that, for the human genome, doing the same thing you are doing requires at least 160 GB.
See also https://stackoverflow.com/questions/726690/what-killed-my-process-and-why for background on how processes get killed and how to find out why. So this is almost certainly a memory issue.
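As that thread describes, the kernel logs a message whenever the OOM killer terminates a process, so you can confirm it directly (may require root):

    # search the kernel log for OOM-killer activity
    dmesg -T | grep -i -E 'out of memory|killed process'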