spark is able to report and calculate a number of different metrics.

In all cases, the source data for the metrics comes from elsewhere: either the server itself or standard Java/OS APIs. If something seems wrong, it is most likely because the raw data spark receives is incorrect.
| Metric Name | Data Source |
|---|---|
| TPS | Server event (via spark's …) |
| MSPT | Server event (via spark's …) |
| CPU Usage | Java API (`jdk.management`/`OperatingSystemMXBean`) |
| Memory Usage | Java API (`jdk.management`/`OperatingSystemMXBean`) & … |
| Disk Usage | Java API (`java.base`/`FileStore`) |
| GC | Java API (`jdk.management`/`GarbageCollectorMXBean`) |
| Player Ping | Server API (via spark's …) |
| OS name and version | |
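To see the kind of raw data these Java APIs expose, you can query them directly. This is a minimal standalone sketch, not spark's actual implementation; it reads CPU load, physical memory, disk space, and GC counts from the same `OperatingSystemMXBean`, `FileStore`, and `GarbageCollectorMXBean` APIs listed in the table:

```java
import java.io.IOException;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MetricsProbe {
    public static void main(String[] args) throws IOException {
        // CPU and memory: the extended MXBean from the jdk.management module.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Process CPU load: " + os.getProcessCpuLoad());
        System.out.println("System load avg:  " + os.getSystemLoadAverage());

        // Disk usage: FileStore for the current working directory.
        FileStore store = Files.getFileStore(Paths.get("."));
        System.out.println("Disk total:  " + store.getTotalSpace());
        System.out.println("Disk usable: " + store.getUsableSpace());

        // GC: one MXBean per collector, with cumulative counts/times.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}
```

If the numbers printed here look wrong on your machine, a profiler reading the same APIs will report the same wrong numbers.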
## Containers and Docker
Occasionally, we see metrics (mostly CPU/Memory usage) being misreported when the server (and by extension spark) is running inside a container (Pterodactyl, Docker, etc).

There's not much spark can do about this. As the table above shows, spark just uses the standard Java and OS APIs to obtain raw metrics data. If that data is not accurate, the problem lies either with your setup or with a Java/Docker/OS bug.
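One way to check what the JVM itself sees inside a container is to print its view of the hardware. This is an illustrative sketch, not part of spark: on a container-aware JVM (JDK 10+, where `-XX:+UseContainerSupport` is on by default), these calls should reflect the container's CPU and memory limits rather than the host's totals; if they show host values instead, the container limits are not being picked up:

```java
public class ContainerCheck {
    public static void main(String[] args) {
        // CPUs the JVM believes it may use (limited by cgroup CPU quota
        // when container support is active).
        System.out.println("CPUs visible to JVM: "
                + Runtime.getRuntime().availableProcessors());

        // Maximum heap the JVM will grow to (derived from the container
        // memory limit when one is set and no -Xmx overrides it).
        System.out.println("Max heap (bytes): "
                + Runtime.getRuntime().maxMemory());
    }
}
```

Comparing this output inside and outside the container is a quick way to tell whether a misreported metric comes from the JVM/container boundary or from somewhere else.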