Hardware requirements

You can install WProofreader SDK Server on-premise on:

  • A dedicated server

  • A virtual machine (you can enable VMware virtualization when installing WebSpellChecker)*

  • Docker container

Alternatively, you can deploy it with one of the cloud service providers.

*AI-based language engines for English, German, and Spanish are not supported under VM VirtualBox.

Here are the minimum hardware requirements for the installation:

HDD

Minimum value: ~1.95 GB (Windows), ~2.15 GB (Linux), ~3.35 GB (Docker image)

A minimum of 1.0 GB of disk space is required for the installation package, including the application server (AppServer), spelling and grammar check libraries, dictionaries, and web components (JS and CSS files, localization files, web server configs, etc.).

Additionally:

  • ~3.0 GB of disk space will be used for the AI engines (568 MB for AI-based English, 1155 MB for AI-based German, 1220 MB for AI-based Spanish).

  • ~490 MB will be used for special features such as English autocomplete suggestions.

Also, extra space can be required for the following components:

  • Up to 2 MB for all personal user dictionaries in .../UserDictionaries and up to 10 MB for global custom dictionaries in .../CustomDictionaries directory.

  • Up to 15 MB for the AppServer log files saved in the AppServer/Logs directory. Once a log file reaches 10 MB in size, a new log file is created.

  • Web server logs, for example the access logs of Apache HTTP Server, can require a significant amount of disk space since they keep records of all served requests. It is your responsibility to monitor and control the space consumed by access logs.

  • Up to 90 MB of disk space may be required if you decide to enable detailed logging for one of the components. This type of logging is turned off by default.

  • Additional space for usage statistics logs may be required if you enable them. For details, refer to the Enabling collection of usage statistics in logs guide.

  • Around 336 MB of additional space for each model of the NER (named entity recognition) library.

Around 3.2 GB of disk space should be allocated to a Docker image with AI-based English and English autocomplete suggestions enabled.

Make sure you regularly verify the amount of disk space consumed by your logs and allocate more space if necessary.
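The component sizes above can be rolled into a quick total. The following is a minimal sketch that only reuses the figures quoted on this page; the helper name and defaults are illustrative, not part of the product:

```python
# Rough disk-space estimator based on the figures on this page (values in MB).
# Adjust the constants to match the components you actually install.

BASE_INSTALL_MB = 1024      # minimum for the installation package (1.0 GB)
AI_ENGINES_MB = {
    "english": 568,         # AI-based English
    "german": 1155,         # AI-based German
    "spanish": 1220,        # AI-based Spanish
}
AUTOCOMPLETE_EN_MB = 490    # English autocomplete suggestions
USER_DICTS_MB = 2           # all personal user dictionaries
CUSTOM_DICTS_MB = 10        # global custom dictionaries
APPSERVER_LOGS_MB = 15      # AppServer log files

def estimate_disk_mb(ai_languages=(), autocomplete_en=False):
    """Return an approximate disk footprint in MB for the chosen components."""
    total = BASE_INSTALL_MB + USER_DICTS_MB + CUSTOM_DICTS_MB + APPSERVER_LOGS_MB
    total += sum(AI_ENGINES_MB[lang] for lang in ai_languages)
    if autocomplete_en:
        total += AUTOCOMPLETE_EN_MB
    return total

print(estimate_disk_mb(ai_languages=("english",), autocomplete_en=True))  # → 2109
```

Note that this excludes web server access logs and detailed component logging, which grow over time and need headroom of their own.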

RAM

Minimum value: 2 GB

At least 2.0 GB of memory is required for the spell check engine, the grammar check engine, and cache functioning in AppServer. The AI models themselves and the libraries they require consume around 1.5 GB.

Also, consider the following RAM requirements:

  • 50 MB for a spelling check engine (English language dictionary is actively used).

  • 135 MB for the English grammar check engine; around 700 MB if all languages with grammar checking support are enabled.

  • 10 MB for the cache enabled for on-premise deployments (for 10 000 suggestions). As a rough estimate, one misspelling with its suggestions takes about 100 bytes; the more misspellings and suggestions are added to the cache, the more RAM is needed.

  • If only the AI-based engine for English is needed and used, it will consume around 1.5 GB of memory at launch and then drop to around 700 MB.

The numeric requirements in the list above may change depending on your custom environment setup and usage.
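The RAM figures above can be combined the same way. This is a back-of-the-envelope sketch using only the numbers quoted in this section; the function and its defaults are illustrative assumptions:

```python
# Rough RAM estimator based on the figures on this page (values in MB).
# Actual consumption varies with your environment setup and usage.

SPELL_ENGINE_MB = 50        # spelling engine with the English dictionary in use
GRAMMAR_EN_MB = 135         # English grammar check engine (~700 MB for all languages)
AI_EN_PEAK_MB = 1536        # AI-based English at launch (~1.5 GB), ~700 MB afterwards
BYTES_PER_CACHE_ENTRY = 100 # one misspelling plus its suggestions

def estimate_ram_mb(cached_misspellings=10_000, ai_english=False):
    """Return an approximate peak RAM footprint in MB."""
    total = SPELL_ENGINE_MB + GRAMMAR_EN_MB
    total += cached_misspellings * BYTES_PER_CACHE_ENTRY / 1_000_000
    if ai_english:
        total += AI_EN_PEAK_MB
    return total

print(estimate_ram_mb(ai_english=True))  # → 1722.0
```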

CPU

Minimum value: 2 CPU cores

A CPU with AVX2/AVX512* instruction support is required.

*AVX stands for Advanced Vector Extensions.
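On Linux, you can verify the AVX2/AVX512 requirement before installing by scanning the CPU flags in /proc/cpuinfo. A minimal sketch (the helper name is illustrative):

```python
# Check whether the CPU advertises the AVX2 or AVX-512 flags mentioned above.
# On Linux, the kernel exposes CPU feature flags in /proc/cpuinfo.

def has_required_avx(cpuinfo_text: str) -> bool:
    """True if the flags line lists avx2 or any avx512* feature."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return "avx2" in flags or any(f.startswith("avx512") for f in flags)

try:
    with open("/proc/cpuinfo") as f:
        print("AVX2/AVX512 support:", has_required_avx(f.read()))
except FileNotFoundError:
    print("/proc/cpuinfo not available (non-Linux system)")
```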

Please note that these are minimum installation requirements, which change and vary depending on:

  • the number of end-users who will be using the functionality;

  • the volume of text end-users need to proofread;

  • the type of language as well as the physical size of its dictionaries;

  • the percentage of text errors.

If you want to use an AWS EC2 instance, you can choose a smaller instance such as t3.medium or t3a.medium with 2 CPUs supporting AVX2/AVX512 instructions and 4 GB of memory, or a server with similar characteristics. We host and maintain the cloud product version on Amazon Web Services (AWS) and use a set of m5.xlarge EC2 instances under a load balancer for our application’s workload, which is up to 70,000 requests per minute.

If you expect a high load on AI-based languages, you may consider using instances with NVIDIA T4 Tensor Core GPUs for batch processing (e.g. g4dn.xlarge EC2 instances).

If you don’t have dedicated hardware, you can try using the cloud service we provide. No software installation or configuration is required, and you can migrate to your own server at any time later.
