As with other loaders, if a file is larger than 100 MiB, its content is not ingested into the Software Heritage archive.
Implementation-wise (storage side), such contents end up referenced in the content_missing table (with their associated hashes).
For the deposit use case, deposits matching that criterion should be rejected (with an explicit 'reason').
Implementation-wise, the functional checks could walk the deposit's associated archive and verify that every file respects this limit.
If not, reject the deposit as usual.
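A minimal sketch of such a check (hypothetical helper names; the actual swh-deposit checks API may differ, and real deposits may be tarballs or zips):

```python
import tarfile

MAX_CONTENT_SIZE = 100 * 1024 * 1024  # 100 MiB ingestion limit

def oversized_members(archive_path, limit=MAX_CONTENT_SIZE):
    """Return the names of regular files in a tar archive exceeding `limit` bytes."""
    with tarfile.open(archive_path) as tar:
        return [m.name for m in tar.getmembers() if m.isfile() and m.size > limit]

def check_deposit_archive(archive_path, limit=MAX_CONTENT_SIZE):
    """Functional check sketch: return (ok, reason).

    If any file in the archive is over the limit, the deposit should be
    rejected with an explicit reason listing the offending files.
    """
    too_big = oversized_members(archive_path, limit)
    if too_big:
        return False, "archive contains files larger than the %d-byte limit: %s" % (
            limit, ", ".join(too_big))
    return True, None
```

The check only inspects archive metadata (member sizes), so it does not need to extract the archive's contents.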
Also make sure that a deposit's associated software source code (once injected into the archive) is cookable (that, and T1714 :)