AI in Teamspective – FAQ

Last updated: January 19, 2026

Frequently asked questions and answers about AI in Teamspective

Does Teamspective use AI in its services?

Yes. Teamspective uses AI-powered features to enhance the user experience and help you take better actions, faster.

AI capabilities include text analysis, automatic summaries, suggestions, and insights.

These capabilities help leaders more easily understand engagement trends, gather feedback, and take action more effectively.


What AI models, technologies, and service providers does Teamspective use?

Teamspective uses Large Language Models (LLMs) as the core AI technology within its platform. Specifically:

  • AI Models: GPT-4 and GPT-4o mini

  • Technology Providers: Microsoft Azure OpenAI, Anthropic

  • Service Location: All AI processing is handled via Microsoft Azure’s data centers located in Sweden, ensuring compliance with EU data residency and GDPR requirements.

  • Teamspective uses privately hosted LLMs that are deployed solely for Teamspective's use, not in public or shared environments.

While the underlying models are developed by third-party providers such as OpenAI, Teamspective builds all prompts, use-case logic, data inputs, and controls in-house to ensure high-quality, secure, and context-aware AI features.


What features in Teamspective currently use AI?

AI is used across several parts of the platform, including but not limited to:

  • Engagement insights: Detects patterns and trends in employee survey results.

  • Action suggestions: Recommends next steps for managers and admins based on data.

  • Sentiment analysis: Analyzes open-text responses and comments for emotional tone and recurring topics.

  • Feedback summaries: AI summarizes personal feedback to highlight patterns and the most valuable insights.

  • Feedback coach: AI reviews feedback and suggests improvements while a user is writing it to another person in Teamspective.

  • Leadership coach: AI uses all available Teamspective data (limited by the person's role and permissions) to answer a user's questions about team engagement, personal feedback, 1-1 discussion planning, team workshops, and more.

We continuously evaluate and expand the use of AI to improve usability and insights.


Is AI used to make decisions about individuals or teams?

No. AI in Teamspective is designed to assist, not to decide. All outputs—such as summaries or recommendations—are informational and non-binding. Decisions remain in the hands of human users, typically managers or admins.


Does Teamspective use customer data to train its AI models?

No. Teamspective does not use your data to train AI models. Any processing of your content is solely for providing and improving the specific features you use. Customer data remains your property and is handled in accordance with our Terms of Service and Data Processing Agreement.

For this reason, the AI models do not learn or adapt based on your specific feedback or usage patterns. However, Teamspective may adjust general prompts or configurations to improve the performance and relevance of AI responses system-wide.


What data is shared with AI models?

Only the specific content necessary to generate the requested output is sent to AI-powered processing. For example, when you use summarization or text analysis features, the relevant text (such as a feedback comment or survey response) is securely sent to the AI model. Teamspective does not send your name, other workspace data or unrelated content to AI services.


Does Teamspective send data to AI models every time I open a page?

No. Data is only sent to AI-powered services when you actively use features that require AI processing (e.g., generating summaries, analyzing feedback, suggesting actions).

Responses are cached and reused whenever possible. For example, your feedback summary will remain the same until new data has been collected.

Simply viewing pages, browsing content, or accessing dashboards does not automatically transmit data to AI models.


How does Teamspective ensure the privacy and security of data processed by AI features?

Teamspective follows industry-standard security practices, including:

  • Encryption in transit and at rest

  • Strict access controls and role-based permissions

  • Data minimization and purpose limitation principles

  • Annual security audits and vulnerability assessments

Where third-party AI services are used (e.g., language models for summarization), we require contractual assurances that data is not stored or used for model training.


Who can access AI-generated content within our organization?

Access to AI-generated summaries and recommendations follows role-based access control. For example, feedback summaries are only visible to the recipient (or optionally to their manager, depending on settings). Admins and team leads can access broader engagement insights and trends based on their permissions.
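The access rule for feedback summaries described above can be sketched as a simple permission check. This is an illustrative example only; the data model and function names are assumptions, not Teamspective's actual code.

```python
from dataclasses import dataclass

# Hypothetical sketch of role-based access to a feedback summary:
# visible to the recipient, and optionally to the recipient's
# manager when the workspace setting allows it.
@dataclass
class Summary:
    recipient_id: str
    manager_id: str

def can_view_summary(viewer_id: str, summary: Summary,
                     share_with_manager: bool) -> bool:
    if viewer_id == summary.recipient_id:
        return True
    if share_with_manager and viewer_id == summary.manager_id:
        return True
    return False
```

Anyone outside these two roles is denied by default, which mirrors the principle that broader insights are only exposed according to each person's permissions.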


Can I opt out of AI-powered features?

Yes. If your organization prefers to disable certain AI-powered capabilities, please contact our support team. We can help you configure features to meet your compliance or data handling requirements.


Are AI-generated summaries 100% accurate?

AI-generated summaries aim to be helpful and informative, but they may occasionally simplify or misinterpret nuanced feedback. We encourage users to review the original inputs alongside AI outputs when making important decisions.


How does Teamspective ensure quality and consistency of AI features?

A combination of best-practice methods is used to ensure AI meets the reliability and consistency standards we have set.

First, we use structured outputs, which ensure responses arrive in the correct format and carry field-level instructions describing what each field should contain.

Second, we apply guardrails in Azure AI Foundry (where we host our models) to guarantee that the content is not harmful.

Third, output quality is monitored when creating or editing AI prompts. We use LLMs to judge outputs at scale, and conduct human testing to review outputs as part of the development process.
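As a hypothetical illustration of the structured-output idea, a model response can be validated against a fixed schema before it ever reaches the product. The field names and types below are assumptions for the sake of the example, not Teamspective's actual schema.

```python
import json

# Illustrative sketch of structured output: the model is asked to
# return JSON in a fixed shape, and the response is validated
# before use. Field names here are hypothetical.
EXPECTED_FIELDS = {"summary": str, "themes": list, "confidence": float}

def validate_ai_output(raw_response: str) -> dict:
    data = json.loads(raw_response)
    for field, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"Field '{field}' missing or wrong type")
    return data
```

A response that is missing a field, or that puts prose where a list is expected, is rejected rather than shown to the user.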


Do you use AI for emotion recognition or social scoring?

No. Teamspective does not use AI to evaluate individuals’ emotional states via biometrics, nor do we apply any form of social scoring. We focus on professional feedback and survey data provided by users.


Can AI-generated results be audited or traced?

Yes. We maintain logs of interactions with the AI system and provide audit trails that allow us to trace what data was used for any specific AI output. This is part of our commitment to transparency and accountability.


How is AI usage aligned with GDPR and the EU AI Act?

We fully comply with GDPR and closely follow the evolving guidance on the EU AI Act. Our use of AI is not classified as high-risk, and we adopt a “compliance-by-design” approach, including risk assessments, human oversight, and transparency.


Is Teamspective categorized as high risk in the EU AI Act?

No. Teamspective's use of AI does not fall within the high-risk categories defined by the EU AI Act.


Where can I learn more about Teamspective’s data handling practices?

For more details, please review our Terms of Service and Data Processing Agreement.


Have questions?

If you have any further questions about how AI is used or how your data is processed, please contact our support team at support@teamspective.com.