From b84e8aaee9d01583aad317d4bab0bd372afbc3ca Mon Sep 17 00:00:00 2001
From: Thomas Lamprecht
Date: Wed, 7 Apr 2021 17:12:01 +0200
Subject: [PATCH] server: rest: switch from fastest to default deflate
 compression level

I made some comparisons with bombardier[0]; the ones listed here are
30s of looped requests with two concurrent clients:

[ static download of ext-all.js ]:
lvl                                 avg     / stdev    / max
none      1.98 MiB  100 %         5.17ms /   1.30ms / 32.38ms
fastest 813.14 KiB   42 %        20.53ms /   2.85ms / 58.71ms
default 626.35 KiB   30 %        39.70ms /   3.98ms / 85.47ms

[ deterministic (pre-defined data), but real API call ]:
lvl                                 avg     / stdev    / max
none    129.09 KiB  100 %         2.70ms / 471.58us / 26.93ms
fastest  42.12 KiB   33 %         3.47ms / 606.46us / 32.42ms
default  34.82 KiB   27 %         4.28ms / 737.99us / 33.75ms

The size reduction is quite a bit better with default, but it is also
slower, though only when testing over an unconstrained network. For
real-world scenarios where compression actually matters, e.g., when
using a spotty train connection, we will be faster again with better
compression.

A GPRS-limited connection (Firefox developer console) shows the
following load times (until the DOMContentLoaded event triggered):

lvl            t    x faster
none    9m 18.6s    x 1.0
fastest 3m 20.0s    x 2.8
default 2m 30.0s    x 3.7

So for the worst case, using slightly more CPU time on the server has
a tremendous effect on the client load time. Using a more realistic
example and limiting to "Good 2G" gives:

none    1m  1.8s    x 1.0
fastest     22.6s   x 2.7
default     16.6s   x 3.7

16s is somewhat OK, >1m just isn't...

So, use the default level to ensure we get bearable load times on
clients, and if we want to improve transmission size AND speed then we
could always use an in-memory cache; only a few MiB would be required
for the compressible static files we serve.
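The "x faster" factors above are simply the uncompressed load time
divided by the compressed one. A minimal sketch recomputing them from
the quoted GPRS numbers (illustrative only, not part of this patch;
the `speedup` helper is made up for the example):

```rust
// Illustrative only: recompute the "x faster" column from the
// GPRS-limited load times quoted above.
fn speedup(baseline_secs: f64, t_secs: f64) -> f64 {
    baseline_secs / t_secs
}

fn main() {
    let none = 9.0 * 60.0 + 18.6; // 9m 18.6s, no compression
    let fastest = 3.0 * 60.0 + 20.0; // 3m 20.0s, Level::Fastest
    let default = 2.0 * 60.0 + 30.0; // 2m 30.0s, Level::Default
    println!("fastest: x {:.1}", speedup(none, fastest)); // x 2.8
    println!("default: x {:.1}", speedup(none, default)); // x 3.7
}
```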
Signed-off-by: Thomas Lamprecht
---
 src/server/rest.rs | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/server/rest.rs b/src/server/rest.rs
index 07460125..7922897c 100644
--- a/src/server/rest.rs
+++ b/src/server/rest.rs
@@ -443,7 +443,7 @@ pub async fn handle_api_request {
-            let mut enc = DeflateEncoder::with_quality(data, Level::Fastest);
+            let mut enc = DeflateEncoder::with_quality(data, Level::Default);
             enc.compress_vec(&mut file, CHUNK_SIZE_LIMIT as usize).await?;
             let mut response = Response::new(enc.into_inner().into());
             response.headers_mut().insert(
@@ -607,7 +607,7 @@ async fn chuncked_static_file_download(
             );
             Body::wrap_stream(DeflateEncoder::with_quality(
                 AsyncReaderStream::new(file),
-                Level::Fastest,
+                Level::Default,
             ))
         }
         None => Body::wrap_stream(AsyncReaderStream::new(file)),