today's lesson is...
no matter how many graphs i make
folks are more comfortable with new ideas and change when they can observe them on their system.
that reminds me - i should put the workload group max grant percent in there, too.
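for reference, that setting is readable per workload group. a quick sketch (request_max_memory_grant_percent defaults to 25):

-- current max memory grant percent for each workload group
SELECT g.group_id,
       g.name,
       g.request_max_memory_grant_percent
FROM sys.dm_resource_governor_workload_groups AS g;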
for some added confidence, let's say i'm proposing a change from the default 25% max grant down to 6%. Can check dm_exec_query_stats for queries that used more than 6% of grantable memory, like this.
(T-SQL query text in alt text. assumes default resource pool and "large" semaphore).
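a sketch along those lines (assuming pool_id 2 = default pool, resource_semaphore_id 0 = the "large" semaphore, the SQL Server 2016+ grant columns in dm_exec_query_stats, and the proposed 6% threshold):

-- plans whose max used grant exceeded 6% of the large semaphore's grantable memory
;WITH sem AS (
    SELECT target_memory_kb
    FROM sys.dm_exec_query_resource_semaphores
    WHERE pool_id = 2                 -- default pool
      AND resource_semaphore_id = 0   -- "large" semaphore
)
SELECT TOP (20)
       qs.max_used_grant_kb,
       qs.max_grant_kb,
       qs.max_ideal_grant_kb,
       CONVERT(decimal(5, 2), 100.0 * qs.max_used_grant_kb / sem.target_memory_kb) AS max_used_pct_of_grantable,
       qs.execution_count,
       qt.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS JOIN sem
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qs.max_used_grant_kb > 0.06 * sem.target_memory_kb
ORDER BY qs.max_used_grant_kb DESC;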
For these purposes i'm mulling over using max_* vs last_* from dm_exec_query_stats. max_* would just be the max over however long the plan_handle has been cached. Yeah, i think i'll go with max_* over last_*. Although for this workload on this system it won't make much difference.
For these purposes, Query Store isn't as exciting as dm_exec_query_stats or dm_exec_query_memory_grants. query_store_runtime_stats gives max_query_max_used_memory, but no direct indication of granted or ideal memory, and no group_id or pool_id.
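what it does give, for comparison (a sketch; max_query_max_used_memory is reported in 8KB pages, so multiply by 8 for KB):

-- best Query Store can do: max used memory per plan, no grant/ideal, no pool or group
;WITH mem AS (
    SELECT rs.plan_id,
           MAX(rs.max_query_max_used_memory) * 8 AS max_used_memory_kb
    FROM sys.query_store_runtime_stats AS rs
    GROUP BY rs.plan_id
)
SELECT TOP (20)
       q.query_id,
       p.plan_id,
       mem.max_used_memory_kb,
       qt.query_sql_text
FROM mem
JOIN sys.query_store_plan AS p ON p.plan_id = mem.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
ORDER BY mem.max_used_memory_kb DESC;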
this graph tells me a story of 20 to 100 concurrent queries using up to 20% of the grantable memory in the default pool (and briefly a scant bit more). A strong hint that no query in that interval needed a 25% grant.
(everything is in the default or internal pool in this instance)
And this graph tells me that up to 150 concurrent queries were waiting on free grantable memory in order to get their grant - waiting for a query to complete so they could start.
the red boxes show times when a grant request near 25% was the next candidate.
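to watch that live, dm_exec_query_memory_grants shows the waiters, and is_next_candidate flags the one that gets the next shot. a sketch (same default pool / large semaphore assumption):

-- current grant waiters, with the requested grant as a percent of grantable memory
SELECT mg.session_id,
       mg.requested_memory_kb,
       CONVERT(decimal(5, 2), 100.0 * mg.requested_memory_kb / rs.target_memory_kb) AS requested_pct_of_grantable,
       mg.wait_time_ms,
       mg.wait_order,
       mg.is_next_candidate
FROM sys.dm_exec_query_memory_grants AS mg
JOIN sys.dm_exec_query_resource_semaphores AS rs
  ON rs.pool_id = mg.pool_id
 AND rs.resource_semaphore_id = mg.resource_semaphore_id
WHERE mg.grant_time IS NULL          -- still waiting for a grant
  AND mg.pool_id = 2                 -- default pool
  AND mg.resource_semaphore_id = 0   -- "large" semaphore
ORDER BY mg.wait_order;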
because of the requirement that free grantable memory equal to 50% of the grant size must remain after the grant, a 25% grant request requires at least 37.5% of grantable memory to be free: 25% for the grant itself plus another 12.5% left untouched.
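and a quick way to eyeball whether a near-25% grant could be satisfied right now (sketch, same assumptions as above):

-- 25% for the grant + 12.5% that must stay free = 37.5% of target must be available
SELECT rs.target_memory_kb,
       rs.available_memory_kb,
       CONVERT(decimal(5, 2), 100.0 * rs.available_memory_kb / rs.target_memory_kb) AS available_pct,
       CASE WHEN rs.available_memory_kb >= 0.375 * rs.target_memory_kb
            THEN 'yes' ELSE 'not yet' END AS room_for_a_25_pct_grant
FROM sys.dm_exec_query_resource_semaphores AS rs
WHERE rs.pool_id = 2 AND rs.resource_semaphore_id = 0;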
and this graph says to me: c'mon already, let's convert some of these resource_semaphore waits to running queries, since i've got plenty of CPU left to use.
what about tempdb? sure, it doesn't look like we'll be short-changing queries. but who knows when some weird "memory fraction" math will take over and a previous spill spills even more?
in short, i'm not too worried.