CARE is built to connect reading and writing with NLP models. Whether you want live model suggestions during a task or precomputed model output over many documents, the platform supports both. Researchers can plug in their own models (e.g. sentiment, grading, or custom skills) via the NLP Broker and use them inside studies or for preprocessing.
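To make the broker idea concrete, here is a minimal sketch of how a custom skill might be registered and dispatched. The request/response shape (`"inputs"`, `"outputs"`, the `SKILLS` registry, and the toy word-list sentiment logic) are all assumptions for illustration, not the actual NLP Broker protocol.

```python
# Hypothetical sketch of a skill node behind an NLP Broker; field names
# like "inputs"/"outputs" and the registry are assumptions.

def sentiment_skill(request: dict) -> dict:
    """Toy sentiment skill: maps input text to a label the broker could relay."""
    text = request["inputs"]["text"]
    positive = {"good", "great", "helpful", "clear"}
    negative = {"bad", "unclear", "confusing", "wrong"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"skill": "sentiment", "outputs": {"label": label, "score": score}}

# A broker-style registry mapping skill names to callables.
SKILLS = {"sentiment": sentiment_skill}

def dispatch(skill_name: str, inputs: dict) -> dict:
    """Route a request to a registered skill, as a broker might."""
    return SKILLS[skill_name]({"inputs": inputs})
```

A real deployment would replace the registry with the broker's own routing and the toy classifier with an actual model, but the register-then-dispatch pattern is the same.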
During a study, participants can work with documents that already carry NLP output or receive it live. For example, a sentiment model can attach labels to comments as they are written, so annotators see model feedback while they work. This supports human-in-the-loop setups: comparing human and model judgements, testing interfaces that show AI suggestions, or guiding annotators with model-based hints. Exactly how results are shown (e.g. icons, side panels) can be tailored in the frontend to your study design.
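One of the human-in-the-loop setups above, comparing human and model judgements, can be sketched as a simple agreement measure. The label format is an assumption; in practice you would feed in whatever labels your study and skill produce.

```python
# Hedged sketch: fraction of items where a human annotator and a model
# assign the same label. Label values here are illustrative.

def agreement(human_labels: list[str], model_labels: list[str]) -> float:
    """Return the share of positions where both lists agree (0.0 if empty)."""
    assert len(human_labels) == len(model_labels), "one label per item each"
    if not human_labels:
        return 0.0
    matches = sum(h == m for h, m in zip(human_labels, model_labels))
    return matches / len(human_labels)
```

More refined comparisons (e.g. chance-corrected agreement such as Cohen's kappa) follow the same pattern of aligning per-item human and model outputs.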
For larger corpora or to avoid delays during the study, CARE lets administrators apply NLP skills to documents or submissions in bulk. You choose a skill (e.g. grading, assessment), map its inputs to your data (e.g. submission text, document content, or a configuration), select which files to process, and run. Processing runs in the background; you can monitor progress and cancel if needed. Results are stored and can be used later in the study (e.g. as pre-filled suggestions or as a baseline to compare with human annotations). Typical uses include pre-processing before a study starts, building ground truth or baseline data, and running the same pipeline on many files with one configuration.
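The batch flow above (choose a skill, map its input to your data, process files in the background with progress and cancellation) can be sketched as follows. The skill callable, record fields, and cancellation hook are assumptions, not CARE's actual Apply Skills API.

```python
# Hedged sketch of batch skill application: read each record's mapped
# input field, run the skill, collect results, and stop early on cancel.

def apply_skill_batch(skill, records, input_field, cancelled=lambda: False):
    """Run `skill` over `records`, reading text from `input_field`.

    Returns (results, processed_count). Checks `cancelled()` before each
    record so a long-running batch can be aborted partway through.
    """
    results = []
    for record in records:
        if cancelled():
            break
        output = skill({"text": record[input_field]})
        results.append({"id": record["id"], "output": output})
    return results, len(results)
```

In a real system the loop would run in a background worker and `processed_count` would drive a progress indicator; stored results could then serve as pre-filled suggestions or a baseline, as described above.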
CARE does not ship a fixed list of “NLP feature types.” Skills are whatever you connect to the NLP Broker (and that show up as available in the dashboard). In practice there are three modes of use: real-time (e.g. sentiment or feedback while annotating or editing), batch (Apply Skills on documents or submissions), and in-study modals (e.g. a step that runs a skill and shows its results in a modal). Documented example tasks include sentiment, grading, edit comparison, and LLM feedback; other tasks (e.g. summarization) are possible depending on the skills you register.
The NLP Skills area in the dashboard lists connected skills and their status (online/offline). For each skill you can inspect inputs and outputs, view or copy the configuration, and use a message interface to send test inputs and check responses. That makes it easier to verify that a model behaves as expected before using it in a study or in batch preprocessing.
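The verification step described above, sending a test input and checking the response, can be sketched as a small shape check against the skill's declared outputs. The description format shown is an assumption, not CARE's actual skill schema.

```python
# Hedged sketch of pre-study skill verification: send one test input and
# report which declared output fields are missing from the response.

def check_skill(skill, description, test_input):
    """Return the list of declared output fields absent from the response."""
    response = skill(test_input)
    return [field for field in description["outputs"] if field not in response]
```

An empty result means the skill responded with every field it advertises; anything else flags a mismatch worth investigating before the skill is used in a study or batch run.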
CARE’s NLP features support a range of workflows: natural text interaction and collaboration, structured annotation studies, and model evaluation. You can compare conditions with and without NLP on the same platform, pre-generate model outputs to compare with human annotations, or run revision and feedback studies (e.g. AI-generated feedback and user revisions) in a single environment. Hosting typically involves running the CARE server, the NLP Broker, and your model(s); the documentation describes setup and points to example model nodes and the broker repository so you can get from installation to a working NLP-integrated study quickly.