I’m sorry that you’re using an architecture that makes moderately difficult things easier, but that makes easy things effectively impossible.
As a thought experiment: take the top 20 stats about the game that you wish you had. How hard would it be to express them as SQL queries, or in R, etc., assuming the game’s current state and/or history were stored in a queryable form? It’s hard to overstate the value of being able to perform ad-hoc queries and analysis over your data.
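To make that concrete: here’s a minimal sketch of what those stats could look like once the data is in a relational store. The schema (`players`, `trades`) and the stats themselves are my assumptions for illustration, not the game’s actual model — the point is that each new question is one line of SQL, not a code change.

```python
import sqlite3

# Hypothetical schema -- table and column names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT, gold INTEGER);
    CREATE TABLE trades  (id INTEGER PRIMARY KEY, seller INTEGER, buyer INTEGER,
                          item TEXT, price INTEGER);
""")
conn.executemany("INSERT INTO players VALUES (?, ?, ?)",
                 [(1, "alice", 500), (2, "bob", 1200), (3, "carol", 90)])
conn.executemany("INSERT INTO trades VALUES (?, ?, ?, ?, ?)",
                 [(1, 1, 2, "sword", 300), (2, 2, 3, "shield", 50),
                  (3, 1, 3, "potion", 10)])

# Stat 1: "top N richest players" -- one line of SQL, no game-code changes.
richest = conn.execute(
    "SELECT name, gold FROM players ORDER BY gold DESC LIMIT 2").fetchall()
print(richest)  # [('bob', 1200), ('alice', 500)]

# Stat 2: "total trade volume per seller" -- another ad-hoc question, same store.
volume = conn.execute(
    "SELECT seller, SUM(price) FROM trades GROUP BY seller ORDER BY 2 DESC"
).fetchall()
print(volume)  # [(1, 310), (2, 50)]
```

Each of the remaining 18 stats on the hypothetical list would be a similar one-off query, which is the whole appeal.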
If the current architecture is really causing so many problems, have you considered post-processing the event data stored in the “database” (actually serialized opaque event blobs stored in an append-only row storage system, based on previous dev comments) to extract a subset of it into an easily queryable form? That might give you the flexibility to experiment with ad-hoc queries and analysis at very low incremental cost.
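A sketch of what that post-processing could look like, assuming (purely for illustration — the real blob format is unknown to me) that each blob deserializes to a small record with `type`, `player`, and `ts` fields. The projection keeps only the fields worth querying and drops the rest:

```python
import json
import sqlite3

# Stand-in for the append-only blob store; assume JSON serialization for this sketch.
raw_blobs = [
    json.dumps({"type": "login", "player": "alice", "ts": 100}),
    json.dumps({"type": "trade", "player": "alice", "ts": 105}),
    json.dumps({"type": "login", "player": "bob",   "ts": 110}),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (type TEXT, player TEXT, ts INTEGER)")

# One-pass projection: decode each opaque blob, extract a queryable subset.
for blob in raw_blobs:
    ev = json.loads(blob)
    conn.execute("INSERT INTO events VALUES (?, ?, ?)",
                 (ev["type"], ev["player"], ev["ts"]))

# Ad-hoc questions are now cheap, e.g. event counts per player:
counts = dict(conn.execute(
    "SELECT player, COUNT(*) FROM events GROUP BY player"))
print(counts)  # {'alice': 2, 'bob': 1}
```

Since the source is append-only, this projection can be run incrementally from the last-seen offset, so the event blobs remain the source of truth and the query store is disposable.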
Having no easy way to query/introspect on the game’s state and history means that abuse detection is effectively impossible, which I assume you’ve already noticed given how much abuse has been ignored.
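Even crude abuse detection falls out of a queryable event store almost for free. A hedged sketch, with a toy schema and an arbitrary rate threshold of my own invention — flag anyone whose sustained action rate over their active window is implausibly high:

```python
import sqlite3

# Toy events table; schema, data, and threshold are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (player TEXT, action TEXT, ts INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [("bot_x", "trade", t) for t in range(60)]        # 60 trades in 60s
                 + [("alice", "trade", t * 30) for t in range(4)]) # 4 trades in 2min

# Flag players averaging more than 0.5 actions/sec over their active window.
# MAX(x, 1) (SQLite's two-argument scalar max) guards against a zero-length window.
suspects = conn.execute("""
    SELECT player
    FROM events
    GROUP BY player
    HAVING COUNT(*) * 1.0 / MAX(MAX(ts) - MIN(ts), 1) > 0.5
""").fetchall()
print(suspects)  # [('bot_x',)]
```

Real detection rules would be more subtle, but the shape is the same: each rule is a query over history, trivially tweakable, with no changes to the game itself.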
I’m amazed and perplexed that a game with a few dozen users can generate so much data that logging is infeasible. If there were tens of millions of simultaneous users, maybe. How many terabytes of data per day is the game generating, and why? The costs of running the game must be very high; I’d have expected a game this small, with so few users, to run just fine on a Raspberry Pi or equivalent. Are there specific gameplay patterns players are using that cause problems so far beyond what was apparently expected? Will any of the planned features on the roadmap improve the situation?