How to choose a compression level - quality vs file size, with examples
Last reviewed 2026-05-04. A practical answer to one question on Compress Image: which compression level should you pick? The slider trades file size against visible quality - the defaults work for most photos, but if you have a target size (an email attachment cap, a CMS upload limit, a "compress image to 100 KB" intent) you can pick the level deliberately and stop at the first one that meets the cap. The guide names four anchor levels - 100, 85, 70, 50 - with rough output-size ranges and visible-quality descriptions, and tells you what to do when the slider runs out.
Anchor 1 - level 100 (maximum quality): keep when storage is not the issue
Level 100 (the highest the slider goes) means the encoder applies essentially no quality reduction. For JPEG, this is "encode at maximum quality" - the output is usually still smaller than the input because JPEG's own format overhead is removed, but visually the result is indistinguishable from the source. For PNG, the level controls compression effort, not pixel data - the output decodes to exactly the same pixels as the input. For WebP in lossy mode, level 100 is still technically lossy, but the output is visibly identical at normal viewing distance.
- When level 100 wins: the photo is going to print (where the printer reveals every artifact), the photo is the source for further editing (compressing first then editing compounds quality loss), or you just want a smaller file without trading anything for it.
- Output size, rough range: a 12-megapixel JPEG photo at ~6-8 MB on input typically lands at ~4-6 MB at level 100 - smaller than the input but only because of format overhead removal, not because of quality reduction.
- What you trade: nothing visible. You trade only the file-size savings you would have gotten at lower levels.
Anchor 2 - level 85 (the "indistinguishable at normal viewing" sweet spot): the default for most photos
Level 85 is the level that most photo-storage applications and CMSes use as their default - and the level the Compress Image tool defaults to for the same reason. At level 85 the output is visibly identical to the source at normal viewing distance (1 metre away on a 27-inch monitor, or arm's-length on a phone). Pixel-peeping at 200% zoom reveals very minor smearing in flat color regions, but the artifacts are not visible at the size and distance you actually look at the photo.
- When level 85 wins: the photo is for the web (a blog post, a product page, a social-media upload), the photo is for an email attachment to a non-pixel-peeping recipient, or the photo is for personal storage on a service that has a per-file size cap (most cloud-photo services).
- Output size, rough range: the same 12-megapixel JPEG photo at ~6-8 MB on input typically lands at ~1.5-2.5 MB at level 85 - a 60-75% reduction with no visible quality loss for most viewers.
- What you trade: file size. Going by the ranges above, a level-85 file is roughly twice the size of the same photo at level 70, in exchange for quality that remains visibly identical to the source.
Anchor 3 - level 70 (visible compression on close inspection, but acceptable): when the size cap is real
Level 70 is the level where the slider starts to trade visible quality for file-size savings. At level 70 the artifacts become noticeable on close inspection: smearing in flat areas (a clear blue sky, a beige wall), halos around high-contrast edges (a black hairline against a white background), and blockiness in detailed textures (foliage, fabric weave, skin texture). At normal viewing distance these artifacts are still subtle - most readers will not notice unless they pixel-peep. For thumbnails, social-media-sized images, and use cases where the file-size cap is the binding constraint, level 70 is the right call.
- When level 70 wins: you have a strict file-size target (a CMS that rejects uploads over 1 MB; an email server that rejects attachments over 2 MB; a "compress image to 200 KB" or "compress image to 500 KB" intent that level 85 does not meet), the photo is going to be displayed at a small size (a thumbnail, a profile picture, a list-view image), or you are bulk-compressing a folder of photos where the total-size budget matters more than per-photo perfection.
- Output size, rough range: the same 12-megapixel JPEG at ~6-8 MB lands at ~600 KB to 1.2 MB at level 70 - an 80-90% reduction, with quality loss visible only on pixel inspection.
- What you trade: visible artifacts on close inspection, in exchange for a file roughly half the size of the level-85 output.
Anchor 4 - level 50 and the binary-search procedure for a hard size cap
Level 50 (and below) is the level where compression artifacts become visible at normal viewing distance, not just on pixel-peep. Smearing, halos, and blockiness are obvious - especially in photos with skin tones, detailed textures, or smooth gradients. Level 50 is rarely the right call for a photo unless the file-size cap is so strict that a lower-quality-but-visible result is better than no upload at all. Output range: the same 12-megapixel JPEG lands at ~200-400 KB at level 50 - over 95% reduction, with visible quality loss at normal viewing distance.
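The four anchors' rough figures for the same 12-megapixel, 6-8 MB JPEG can be collected in one place as a lookup - these are the illustrative ranges quoted above, not measurements, and `fits_cap` is a hypothetical helper, not part of any tool:

```python
# Rough output-size ranges (in MB) for a 12-megapixel, 6-8 MB JPEG,
# as quoted in the four anchors above - illustrative figures only.
ANCHOR_RANGE_MB = {
    100: (4.0, 6.0),   # no visible change; overhead removal only
    85:  (1.5, 2.5),   # visibly identical at normal viewing distance
    70:  (0.6, 1.2),   # artifacts on close inspection
    50:  (0.2, 0.4),   # artifacts at normal viewing distance
}

def fits_cap(level, cap_mb):
    """True when even the worst case (top of the range) fits the cap."""
    return ANCHOR_RANGE_MB[level][1] <= cap_mb
```

By this rough table, a 1 MB CMS cap is safe at level 70 but not at level 85, which is exactly the step-down the next section walks through.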
The binary-search procedure for a hard target. If you have a target ("compress image to 100 KB", "must fit under 200 KB for the CMS", "1 MB email cap"), the deliberate path is: start at level 85 (the default), download the result, and check the size against your cap. If the result is over the cap, drop to 70 and repeat. If still over, drop to 50. If the level-85 result is under the cap, raise to 100 for maximum quality within the budget. Three iterations almost always land on a level that meets the cap - and the input photo is never touched, so each iteration starts from the same source. Two adjustments make the search faster:
- If the input has very high pixel dimensions (a 12 MP phone-camera image at 4032 x 3024, for instance) and your use case does not need that resolution (a web display at 800 x 600, a social upload that downscales anyway), resize the image first at high quality, then compress the smaller image at level 85 - the result comes in far under the cap with much better visible quality.
- If the input format is wrong for the use case (a photo saved as a PNG screenshot, where JPEG would be better), convert to JPG first, then compress.
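The step-down procedure just described can be sketched as a small function. Here `size_at(level)` is a hypothetical callback standing in for "compress at this level, download, check the size" - it is not part of any real tool's API:

```python
def pick_level(size_at, cap_bytes):
    """Return the highest anchor level whose output fits under cap_bytes.

    size_at(level) is a hypothetical stand-in for "compress at this
    level and check the output size". Returns None when even level 50
    is over the cap - the slider has run out.
    """
    for level in (85, 70, 50):          # start at the default, step down
        if size_at(level) <= cap_bytes:
            if level == 85 and size_at(100) <= cap_bytes:
                return 100              # budget allows maximum quality
            return level
    return None                         # resize or change format instead
```

With the rough ranges from the anchors above, a 1 MB cap lands on level 70, a generous budget climbs back to 100, and a 200 KB cap exhausts the slider - at which point the fallbacks in the next section take over.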
Format and resize fallbacks: when the slider is the wrong knob
The compression level interacts with the file format. For photos, JPEG and lossy WebP give roughly the same level-vs-size curve - level 85 in WebP produces a file ~25% smaller than level 85 JPEG, with comparable visible quality. For graphics with sharp edges (logos, screenshots, diagrams, text), PNG is the right format - because PNG is lossless, the level controls file-size reduction effort only, never quality. JPEG at any level on a logo or screenshot introduces halos around the text and degrades sharp edges; the right answer is "use PNG, do not use JPEG", not "use JPEG at level 100". The companion guides JPG vs PNG for web and HEIC vs JPG vs WebP cover the format-choice decision; this guide covers the level-choice decision once the format is settled.
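The format rule of thumb in this section can be written down as a tiny decision function - a sketch of this guide's advice, not any tool's API, and the content-kind labels are illustrative:

```python
def pick_format(content_kind, webp_supported=True):
    """Sketch of the format rule of thumb from this guide: sharp-edged
    graphics go lossless (PNG); photos go lossy (WebP when the target
    supports it, since it runs ~25% smaller at the same level, else JPEG).
    """
    sharp_edged = {"logo", "screenshot", "diagram", "text"}
    if content_kind in sharp_edged:
        return "png"    # lossless: edges and text stay crisp
    return "webp" if webp_supported else "jpeg"
```

Only after this choice is settled does the level slider become the right knob.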
If level 50 still does not meet your cap, the slider has run out - the next move is reducing pixel dimensions (Resize Image) or trying a different format (JPG to PNG for a graphic; HEIC to JPG for an iPhone photo where the source is unexpectedly large). For a photo, the resize-then-compress order matters: resize first at 100% quality, then compress the resized image at level 70-85 - the result almost always beats compressing the original at level 50 in both file size and visible quality.
If level 100 still produces visible artifacts (banding in a gradient, posterization in a sky, colour shifts), the issue is not the slider but the input - it may already be a re-compressed file (each compression pass adds new artifacts on top of the old; this is generation loss) or encoded in a colour space the encoder has to convert. The companion guide Compressed JPG looks blurry - three causes walks through that diagnostic flow. For everything else - one cap, one format, the right level - the binary-search path above lands the answer in three iterations on the slider.
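To see why resize-then-compress beats dropping to level 50: at a fixed quality level, JPEG output size scales roughly with pixel count, so the pixel ratio gives a first-order estimate of the saving. This is a back-of-the-envelope sketch, not an encoder guarantee:

```python
def pixel_ratio(in_w, in_h, out_w, out_h):
    """Rough size-saving factor from resizing before compressing.

    JPEG output size scales roughly with pixel count at a fixed
    quality level - an approximation, not an encoder guarantee.
    """
    return (in_w * in_h) / (out_w * out_h)
```

Downscaling a 4032 x 3024 phone photo to 1008 x 756 cuts the pixel count 16x, so a roughly 2 MB level-85 file drops to the neighbourhood of 125 KB - under a 200 KB cap while staying at level 85, where level 50 on the original could not get there without visible damage.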
Why trust these tools
- Ten-plus years of web tooling. The freetoolonline editorial team has shipped browser-based utilities since 2015. The goal has never changed: get you to a working output fast, without an install.
- Truly in-browser - no upload. Every file-processing tool on this site runs in your browser through modern Web APIs (File, FileReader, Canvas, Web Audio, WebGL, Web Workers). Your photo, PDF, audio, or text never leaves your device.
- No tracking during tool use. Analytics ends at the page view. The actual input you paste, drop, or capture is never sent to any server and never written to any log.
- Open-source core components. The processing engines underneath (libheif, libde265, pdf-lib, terser, clean-css, ffmpeg.wasm, and others) are public and auditable. We link to each one in its tool page's footer.
- Free, with or without ads. All tools are fully functional without sign-up. The Disable Ads button in the header is always available if you need a distraction-free run.