-
Figure out how exactly to bundle or include spark: for example, we could create a new timing using Spark's Spark.createTask() method, which takes a string naming the timing task. We would then start the timing with startTiming(), run some code, and finally stop it with stopTiming().
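For illustration, a minimal, self-contained sketch of that workflow. Note that Spark.createTask(), startTiming(), and stopTiming() are the names used in this comment, not confirmed Spark API; the stub class below only mirrors them so the example runs:

```java
// Hypothetical sketch: Spark.createTask()/startTiming()/stopTiming() mirror
// the method names used in the comment above; they are not confirmed Spark API.
final class Spark {
    static Task createTask(String name) {
        return new Task(name);
    }

    static final class Task {
        private final String name;
        private long startNanos;

        Task(String name) {
            this.name = name;
        }

        void startTiming() {
            startNanos = System.nanoTime();
        }

        void stopTiming() {
            long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
            System.out.println(name + " took " + elapsedMs + " ms");
        }
    }
}

public class TimingExample {
    public static void main(String[] args) throws InterruptedException {
        // Create a named timing task, then run some code between start and stop.
        Spark.Task task = Spark.createTask("chunk-loading");
        task.startTiming();
        Thread.sleep(50); // stand-in for the code being measured
        task.stopTiming();
    }
}
```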
-
I feel like one requirement for a replacement should be equally user-friendly names for entries, like Timings has right now; that is a major benefit it has over Spark: someone who doesn't know anything about internals can quickly spot what is wrong without having to wade through data that doesn't matter to them. Having just the (sometimes confusing or even wrong) Mojang names of classes/methods doesn't really help people who are not developers, and right now Timing names also provide additional context, like the type of entity or the default tick counts, that is currently not accessible via Spark or (easily) by other means.

Basically I'm countering this point: Spark profile entries are a lot harder to read than Timings entries are. Timings (in contrast to Spark) even surfaces the parts that cause issues directly at the top and points out lag directly, something Spark doesn't do (at least as far as I know). If anything, the Mojang profiler information (which is in places more granular than Timings) should be used, or could even be pulled into Timings directly to solve the "Timings has been unmaintained for multiple years now" part.

Other requirements (besides better user-friendliness/usability) that I see for a new system that Spark doesn't provide (right now):
-
You should consider contributing to Spark to bring it closer to feature parity with Timings, as the Timings library has been unmaintained for quite some time now (see OC issue) and only becomes more archaic and harder to work with as time goes on. Spark, on the other hand, is still actively maintained and gaining features. Other Paper forks like Pufferfish/Purpur have already replaced their timings with Spark and do not seem to have lost any functionality or ease of use.
-
@regulad I fail to see how your comment helps the discussion of how Spark can/should be included in Paper directly and how it compares to Timings. Of course, if Spark is to be used directly and changes are needed to bring it up to the usability of Timings, then those could be contributed directly to their project. (Although some changes would probably still need to be made in the server itself, not just in Spark alone: e.g. Spark cannot automatically know the context its data is gathered from without markers directly in the server code indicating that in a human-readable way, similar to how Timings works right now.)
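For illustration, a rough sketch of what such in-server markers look like with the co.aikar.timings API that Timings uses today (the handler class and plugin wiring here are made up for the example; Timings.of() and the try-with-resources pattern are the real API):

```java
import co.aikar.timings.Timing;
import co.aikar.timings.Timings;
import org.bukkit.plugin.Plugin;

public final class MarkerExample {
    private final Timing zombieTickTiming;

    public MarkerExample(Plugin plugin) {
        // The string is the human-readable label that appears in the report,
        // so it can carry context (entity type, task name) that a raw stack
        // frame never would.
        this.zombieTickTiming = Timings.of(plugin, "Entity Tick - Zombie");
    }

    public void tickZombie() {
        // try-with-resources: the timing stops automatically when the block exits
        try (Timing ignored = zombieTickTiming.startTiming()) {
            // ... actual zombie tick logic would run here ...
        }
    }
}
```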
-
Antivirus software is generally useless when it comes to jar files and often just flags things somewhat blindly at best. I don't think a malicious result from a single piece of software says anything against an open source project in which you can vet every single line of code in the output.
-
After the migration, would there be a way to re-enable Timings? I personally find Spark completely ineffective for actually finding sources of lag. Right now, though Spark is touted as being beginner-friendly, Timings is readable by people who can't code at all, and Spark lacks valuable integrations with plugins such as Skript. Lag was such a big problem on my server that I had to switch Paper forks because Purpur disabled Timings. If you do decide to replace Timings with Spark, would there be a command-line flag or plugin that re-enables it?
-
As is often pointed out when this point is made, it would be good to bring your feedback directly to the Spark developers regarding what you think is missing to make it usable for you personally. While I personally agree with parts of your statement (and have already forwarded the issues I see to the Spark devs, especially the plugin-integration point), at some point it will probably not be possible to re-enable Timings, since completely removing it adds more value (the devs would no longer need to maintain and potentially work around it) than leaving it available in a disabled state.
-
I have Spark installed on my Paper server (via the plugins folder), but I still get the Spark warning on startup as of 1.20.1. Is there something more I have to do to configure it?
-
The warning is because Timings is still enabled; if you want to get rid of it, you'll need to disable it in the config.
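For reference, on recent Paper versions the switch lives in the timings section of config/paper-global.yml; a sketch, assuming the current config layout (older versions kept these settings elsewhere):

```yaml
# config/paper-global.yml
timings:
  enabled: false
```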
-
The big problem with Spark is that it is not compatible with OpenJ9... In addition, when I read a Spark report I do not understand the terms in the tree structure it presents. I have been administering my server for 13 years, yet despite that experience I find no useful information, as someone who does not know the meaning of the classes or terms Spark shows (in fact I do not even know what I am looking at when I see, for example, "libc.so.6.__futex_abstimed_wait_common()"). How is someone supposed to read such a report without being a developer? Spark therefore combines two big problems, and not small ones...
-
Paper is not OpenJ9-compatible either.
-
I've been using Paper on OpenJ9 for years and it works perfectly.
-
OpenJ9 is slower, ain't it?
-
This issue is for planning and discussing Spark's future integration into Paper, not HotSpot's ability to run in Docker or OpenJ9's performance characteristics.
-
Except that many servers run on OpenJ9 because OpenJDK has a significant memory issue under Docker, especially among hosts using Ptero. So if Spark does not run on J9, then by deciding to integrate Spark instead of Timings you will put all of these servers in difficulty.

This problem is well known; an example: on my server, like many others, OpenJDK keeps reserving more and more memory, never lowering its allocation, until the server crashes. If I install OpenJDK right now, my RAM consumption climbs to 13GB within a few hours and reaches 16GB in approximately 12 hours. The same server on OpenJ9 consumes at most 5-6GB and the memory allocation is perfectly managed; the server could run for two weeks without a reboot (it actually restarts once per day, but it would be possible), which is inconceivable on OpenJDK, and this is independent of the server itself.

Ignoring this and brushing it aside is not reasonable, especially since J9 compatibility does not seem infeasible: apart from Spark, all the other plugins I have used in 13 years on my server (85 of them, in fact) are compatible. So yes, it is the same subject, whether you like it or not :/
-
I've not heard of a memory leak inside OpenJDK, just a general misunderstanding of how Java's memory model works (plus the odd once-in-a-blue-moon native leak nobody has ever managed to figure out) and how it applies when running in a resource-constrained environment.

But even if there were a leak, that generally does not change the fact that we do not support non-HotSpot JVMs. You are more than welcome to use them, and many dozens of people run Paper inside Graal or OpenJ9, but we have zero inclination to impede the rest of the community for the small percentage of servers running in an unsupported environment. We're not going to go out of our way to hamper those environments either, but Timings is a tool that has generally outgrown its utilization, especially in an era where a few ms per tick is a sizable hit. Years ago the performance cost of Timings was far outweighed by its utility; that is far from the case in this era.

You'd do better pleading with the async-profiler folks to consider supporting non-HotSpot environments, but I have no idea what APIs IBM have butchered/removed in their conquest.
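A standalone sketch (standard library only) of the committed-versus-used distinction behind that misunderstanding: the JVM grows its committed heap toward -Xmx and rarely gives it back to the OS, so a host panel graphing committed memory shows steady growth even when live usage is modest.

```java
public final class HeapFigures {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mib = 1024 * 1024;
        // max: the ceiling the JVM may grow to (-Xmx)
        System.out.printf("max:       %5d MiB%n", rt.maxMemory() / mib);
        // committed: memory currently claimed from the OS -- what hosting
        // panels usually graph, and what keeps growing toward -Xmx
        System.out.printf("committed: %5d MiB%n", rt.totalMemory() / mib);
        // used: what live objects actually occupy right now
        System.out.printf("used:      %5d MiB%n",
                (rt.totalMemory() - rt.freeMemory()) / mib);
    }
}
```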
-
I understand, but I would like to point out that the servers experiencing this problem have nothing to do with it: they depend on the choices of their hosts, and for ordinary people it is impossible to know in advance whether a given host runs an incompatible environment :/ In the end, Spark just needs to be made compatible (I posted the issue on the Spark GitHub; wait and see now...). After all, the other plugin developers manage it, so it shouldn't be insurmountable.
-
It's not our role to go out of our way to cater to hosts enforcing unsupported environments onto people, especially ones commonly known within the Java community to come with their own host of headaches and potential caveats/issues.

Timings is no longer a small, inconsequential percentage of a server's MSPT, and it is slowly becoming less and less useful over the years: Mojang's growing complexity means we'd need to drastically increase Timings coverage for it to remain useful, which in turn would come with yet another hit to performance.

There's some interesting stuff in the lineup for async-profiler which should hopefully get us much closer to Timings in terms of ease of understanding what certain parts of the tick loop do. async-profiler does seem to support OpenJ9 to some degree; there are a few J9-related (closed) issues on their tracker, though generally a bunch of them boil down to "we need J9 to fix this".
-
I should mention that I am able to start and run a server on OpenJ9 with Spark installed just fine, use the profiler and everything, so there seem to be other external environmental issues at play beyond just "it crashes on startup".
-
Probably. However, out of 90 plugins, the one and only one whose mere addition makes the server crash on startup with OpenJ9 is Spark, so it is obvious there is a problem with this particular plugin. I understand that you do not want to lift a finger about it; I will not insist any further, since it is of no use and you do not seem to understand. But whether you like it or not, there is a real problem between Spark and J9; it is obvious, clear and precise. You don't need a degree from a top school to see it. Best wishes
-
It is weird, because my old server host didn't work with Spark (the server would crash on startup every time), but my new host works perfectly with it. I made this issue on the Spark GitHub, but it was never addressed: lucko/spark#377. The host it crashed on was Berrybyte and the host it works on is Surf Host, both being very small hosts. I have tested running Spark on my own computer with a backup of my server and it works perfectly fine, so it must have something to do with how certain server hosts are set up.
-
Spark is much better than Timings overall, plus Timings has a performance overhead that Spark doesn't seem to have. That said, the Java agents Spark uses are apparently going to be removed in a future JDK...
-
Why should Spark be bundled with Paper? It would be much better if the admin could choose to install Spark as a separate plugin when needed, which is how it's currently done. If Spark is bundled with Paper, updating Spark would require updating Paper too, which isn't always feasible given the inability to upgrade to a new version (for example, some are still using the 1.20.4 or 1.20.6 core instead of 1.21).
-
It would be a nice change if the default /tps now displayed the Spark TPS, since it is more detailed.
-
If you want to use placeholders with the bundled Spark version, update to build 105 of Paper 1.21 and install https://api.extendedclip.com/expansions/spark/.
-
Based on an internal discussion in August of 2022, we are now entering the planning stages to remove the Timings functionality from the Paper server software and to include spark in the distribution instead. More information will be shared once plans have solidified; this issue is being opened now to begin public tracking of our plans.
Some of the reasons for this are as follows:
Tasks: