ACCEPTABLE USE
Hoody Acceptable Use Policy
What you must not do with Hoody Services.
Last updated: 30 April 2026
0. What This Policy Does
This Acceptable Use Policy ("AUP") sets out the things you must not do with the Hoody Services. It is part of your agreement with Hoody under the Hoody Terms of Service ("Terms"). Violating the AUP is a material breach of the Terms.
We've written this in plain English so you can actually read it. It is not exhaustive — people are creative and we cannot anticipate every situation. When the AUP doesn't explicitly cover something, the spirit of the AUP applies.
If you're not sure whether something is allowed, ask us at legal@hoody.com before you do it. We respond.
0.1 Hierarchy of Restrictions
Your use of the Services is subject to three layers of restrictions, applied cumulatively:
(a) Applicable law. Nothing in this AUP authorizes use that is illegal in any jurisdiction relevant to you.
(b) This AUP. The restrictions in this AUP apply to all Customer use of the Services.
(c) Upstream provider behavioral terms. Where your use of the Services routes through a third-party upstream provider (such as Bare Metal Server hosting providers or AI gateway upstream providers, whether reached directly or through aggregators), the upstream provider's acceptable use policy and usage policy also apply to your traffic on that route — i.e., what your traffic may contain and how you may use the upstream capability. Where the upstream provider's behavioral terms are more restrictive than this AUP for that route, the upstream provider's behavioral terms govern that route. This clause flows down content and use restrictions only; commercial terms between Hoody and the upstream provider (resale, capacity, pricing) are not your concern.
You are responsible for understanding and complying with all three layers. Hoody flows down upstream provider restrictions in this AUP where reasonably possible, but cannot enumerate every upstream provider's complete terms here.
1. Things You Must Not Do With the Services
1.1 Illegal or Harmful Content
You must not use the Services to host, store, generate, transmit, or facilitate the distribution of:
(a) Child sexual abuse material (CSAM), including computer-generated, AI-generated, deepfake, or animated content depicting minors in a sexual context. For purposes of this AUP, a minor is anyone under 18 regardless of jurisdiction. Zero tolerance. Hoody reports to law enforcement and cooperates with the National Center for Missing & Exploited Children (NCMEC) and equivalent authorities.
(b) Non-consensual intimate imagery (so-called "revenge porn") or content sexualizing identifiable individuals without their consent.
(c) Content that incites, threatens, organizes, glorifies, or facilitates violence, terrorism, mass-casualty events, or violent extremism, including material support for designated extremist organizations or individuals.
(d) Material that infringes copyright, trademark, trade secret, patent, or other intellectual property rights of any third party.
(e) Material that violates privacy or publicity rights of identifiable individuals, including doxxing, unauthorized publication of personal information, or non-consensual collection or disclosure of biometric or neural data.
(f) Defamatory, libelous, or knowingly false content that exposes Hoody or third parties to liability.
(g) Material whose hosting or distribution is illegal under the laws of Hoody's jurisdiction or the jurisdiction in which the material is accessible.
1.2 Fraud and Deception
You must not use the Services to:
(a) Engage in fraud, including phishing, pharming, identity theft, or financial scams (Ponzi schemes, pyramid schemes, "make-money-fast" schemes, payday/title-loan exploitation, abusive debt collection);
(b) Impersonate any person, organization, or entity, including Hoody itself;
(c) Distribute deceptive content where the deception is the harm (this includes content stripped of provenance metadata or watermarks designed to indicate it is AI-generated);
(d) Operate fake review schemes, click farms, fake comments/media, or engagement-manipulation services;
(e) Spoof source IP addresses, sender identifiers, or other origin metadata;
(f) Produce or distribute counterfeit goods, falsified identification documents, falsified currency, or falsified government documents;
(g) Use subliminal, manipulative, or deceptive techniques designed to materially distort the behavior of any person by impairing the person's ability to make informed decisions.
1.3 Malicious Software and Network Attacks
You must not use the Services to:
(a) Develop, store, or distribute malware, viruses, worms, trojans, ransomware, or rootkits where the intent is to harm third parties. Carve-out for security research: Storage of malware samples, reverse-engineering work, sandbox detonation analysis, and similar legitimate security research is permitted under the conditions in §4.1.
(b) Operate command-and-control infrastructure for botnets, RATs, or other unauthorized-access frameworks.
(c) Develop or operate persistent-access tools (firmware/hardware implants), automated multi-system compromise tools, or tools designed to bypass security controls without authorization.
(d) Launch denial-of-service or distributed denial-of-service attacks against any third party, including amplification or reflection attacks (DNS, NTP, memcached, or other reflection vectors).
(e) Run, host, or facilitate open mail relays or open proxies, except where the open relay or proxy is the intended product Customer is operating and Customer has obtained any necessary third-party consents.
(f) Conduct unauthorized port scanning, vulnerability scanning, or reconnaissance of systems Customer does not own or have written authorization to test. Carve-out for authorized testing: see §4.2.
(g) Compromise critical infrastructure: power grids, water treatment, medical devices, telecommunications systems, air traffic control, voting machines, healthcare databases, financial markets, or military systems.
(h) Exploit vulnerabilities in computer or network systems without authorization, including via technical or social means.
(i) Intercept communications without authorization.
1.4 Spam and Unsolicited Communications
You must not use the Services to send:
(a) Unsolicited bulk email (UBE), unsolicited commercial email (UCE), or messages in violation of CAN-SPAM, GDPR consent requirements, or equivalent laws;
(b) Bulk SMS, voice messages, or facsimiles in violation of TCPA or equivalent laws;
(c) Mass unsolicited messages on any platform, social network, or messaging service;
(d) Messages with falsified sender data, deceptive subject lines, or hidden originator information.
The use of double-opt-in lists where each recipient has explicitly consented is permitted.
1.5 Cryptocurrency Mining
You must not use the Services to mine cryptocurrency.
This prohibition includes:
(a) Direct mining of Bitcoin, Ethereum, Monero, or any other cryptocurrency; (b) Browser-based mining or "cryptojacking" deployed against third parties; (c) Operating Filecoin, Storj, or similar storage-mining nodes that consume meaningful Container or network resources; (d) Running computationally intensive validation workloads where the primary purpose is earning cryptocurrency or storage tokens; (e) Operating mining-pool infrastructure; (f) Plotting, farming, or other resource-intensive token-earning activities.
This prohibition does not include:
(a) Running a single lightweight wallet, full node, or RPC client for legitimate development or personal use; (b) Use of cryptocurrency for payment to or from Customer's own services that does not impose disproportionate compute load; (c) AI workloads, blockchain analytics, smart-contract development, or testnet validators that incidentally interact with blockchains.
1.6 Network and System Abuse
You must not use the Services to:
(a) Bypass, disable, or circumvent any technical or contractual limitation, including rate limits, free-tier allowances, quotas, or content filters;
(b) Operate the Services in a way that imposes an unreasonable or disproportionately large load on Hoody's infrastructure or upstream networks;
(c) Create multiple accounts to evade per-account limits, to obtain free-tier benefits not intended for Customer's actual use case, or to circumvent a prior suspension or termination;
(d) Resell, sublicense, or rent the Services to third parties, or operate an aggregator on top of Hoody. Customer may build products and services that wrap Hoody as infrastructure for Customer's own end users (in which case Customer remains responsible for End User compliance with this AUP), but Customer may not operate Hoody's gateway capacity, container infrastructure, or Bare Metal Server managed compute offering as a wholesale offering to third parties unrelated to Customer's product;
(e) Operate VPN-as-a-service, Tor exit relays, anonymization-as-a-service, or similar offerings in violation of the upstream hosting provider's terms or in ways that route abuse to or through the Services;
(f) Use automated systems (including AI agents) to create accounts, generate spammy behavior, or otherwise abuse the Services at scale;
(g) Attempt to scrape, distill, or reverse-engineer the Services or the upstream AI providers reachable through Hoody's gateway.
1.7 AI-Specific Prohibitions (Universal Usage Standards)
These prohibitions apply specifically to use of the Hoody AI gateway and to AI-related workloads run within the Services. The structure of this section mirrors industry-standard AI Usage Policy frameworks. Where you route through a specific upstream AI provider, that provider's terms also apply per §0.1.
You must not use the Services or the Hoody AI gateway to:
1.7.1 Compromise Children's Safety
Generate or distribute child sexual abuse material (including AI-generated and computer-generated), engage in or facilitate minor trafficking/sextortion/exploitation, groom minors, generate child abuse instructions, depict pedophilic relationships even in roleplay, or produce content that fetishizes or sexualizes minors. Minor means anyone under 18, regardless of jurisdiction.
1.7.2 Develop or Design Weapons
Aid the production, modification, design, illegal acquisition, weaponization, or delivery of weapons, or develop techniques to evade countermeasures (including medical countermeasures). This applies specifically to: chemical, biological, radiological, and nuclear (CBRN) weapons; high-yield explosives; and weapons-delivery systems.
1.7.3 Compromise Critical Infrastructure
Per §1.3(g), but extended to AI-aided variants: do not use AI to plan, model, or facilitate attacks on critical infrastructure.
1.7.4 Compromise Computer or Network Systems
Per §1.3 generally, but extended to AI-aided variants: do not use AI to discover, exploit, or operationalize vulnerabilities in third-party systems without authorization.
1.7.5 Incite Violence or Hateful Behavior
Generate or distribute content that incites violence; targets individuals, groups, animals, or property for violence; provides material support for extremism or terrorism; or promotes discrimination on protected attributes (race, ethnicity, religion, national origin, gender, sexual orientation, gender identity, age, disability, or other protected characteristics).
1.7.6 Compromise Privacy or Identity
Use AI to violate privacy law, conduct unauthorized access to private information (including biometric/neural data), or impersonate humans by presenting AI-generated content as human-generated where the impersonation itself is the harm. Note: ordinary creative or assistive use of AI is permitted; the prohibition targets deceptive impersonation in contexts where authenticity matters (journalism, customer support, political discourse, dating, etc.).
1.7.7 Create Psychologically or Emotionally Harmful Content
Glamorize or promote suicide, self-harm, disordered eating, compulsive exercise, or unhealthy body image; harass, bully, or intimidate individuals; coordinate harassment campaigns; depict animal cruelty for entertainment; depict gratuitous violence, gore, or sexual violence; or create products designed to cause emotional harm.
1.7.8 Create or Spread Misinformation
Generate or distribute deceptive misinformation about identifiable groups, entities, or persons; deceptive information about laws, regulations, or technical/safety standards; conspiratorial narratives targeting groups; impersonation of identifiable entities; or medical, health, or science misinformation likely to cause harm to readers.
1.7.9 Undermine Democratic Processes
Engage in personalized political campaign targeting, build artificial or deceptive political movements, distribute automated political communications without disclosing AI origin, generate deceptive political synthetic media (deepfakes of candidates, fabricated quotes, etc.), distribute election misinformation, generate lobbying materials based on fabricated facts, disrupt election processes, or facilitate voter suppression.
1.7.10 Use for Prohibited Surveillance, Criminal Justice, or Law Enforcement
You must not use the Services or AI gateway to:
- Make or aid parole or sentencing decisions about identifiable persons (absolute prohibition);
- Engage in non-consensual location, emotion, or communication tracking of identifiable persons;
- Conduct facial recognition for the purpose of identifying individuals in public spaces;
- Conduct predictive policing on individuals;
- Generate trustworthiness, social scoring, or risk scoring of individuals based on broad behavioral patterns without consent;
- Conduct emotion recognition (except in narrowly scoped medical or safety contexts with explicit consent);
- Conduct censorship for governmental authorities;
- Conduct biometric categorization to infer protected attributes (race, religion, sexual orientation, etc.);
- Engage in law enforcement uses that violate civil liberties (mass surveillance, dragnet collection, etc.).
1.7.11 Engage in Fraudulent or Deceptive Practices via AI
Per §1.2, but extended to AI-aided variants: do not use AI to generate counterfeit goods, fake reviews, deepfake media for fraud, multi-level marketing collateral, or AI-assisted plagiarism in academic or professional contexts where attribution matters.
1.7.12 Abuse the AI Gateway Itself
You must not:
- Coordinate multi-account abuse against the gateway;
- Conduct automated account creation or spammy behavior;
- Use the gateway from regions not supported by the upstream provider (Customer's responsibility per PP §5.5(c));
- Conduct jailbreaking, prompt injection attacks against upstream providers, or model distillation without explicit written authorization;
- Bypass, disable, evade, or interfere with content filters, safety systems, or guardrails of upstream AI providers.
Customer is fully responsible and indemnifies Hoody under Section 10.5 of the Terms for any claim arising from Customer's disabling or evading the safety systems of an upstream AI provider.
1.7.13 Generate Sexually Explicit Content
You must not use the AI gateway to generate sex acts, sexual fetish content, incest, bestiality, or to engage in erotic chat. Adult-content creative work is not the target of this prohibition — explicit sexual generative use is.
1.8 High-Risk Use Case Overlay
For the following High-Risk use cases, you must implement BOTH (a) and (b):
High-Risk use cases:
- Legal services
- Healthcare (excluding general wellness or non-diagnostic information)
- Insurance
- Financial services and lending decisions
- Employment decisions (hiring, firing, promotion)
- Housing decisions
- Education admissions and academic testing/accreditation
- Media or professional journalistic content
- Public-sector services (benefits eligibility, social services, etc.)
Required overlays:
(a) Qualified human-in-the-loop. A qualified professional reviews AI output before it is disseminated to or used regarding individuals or consumers. The professional is qualified for the relevant decision (licensed attorney for legal services; licensed clinician for healthcare; etc.).
(b) AI disclosure. Affected individuals are told that AI is being used to help produce advice, decisions, or recommendations, at minimum at the start of each session or interaction.
Failure to implement these overlays is a material breach of the AUP. Customer is solely responsible for the legal compliance of High-Risk use cases (e.g., FCRA for credit, ECOA for lending, Title VII for employment, GDPR Article 22 for automated decision-making in the EU, applicable medical practice law, applicable legal practice law).
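The two overlays above can be sketched as a release gate: output is blocked until a professional with the right credential has approved it, and the AI disclosure is attached before anything reaches the affected individual. This is a minimal illustrative sketch, not a Hoody API; every name in it (Draft, release, the credential strings) is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

AI_DISCLOSURE = "This content was prepared with the assistance of AI."

# Hypothetical mapping of High-Risk use cases to the credential
# required for the human-in-the-loop review (overlay (a)).
REQUIRED_CREDENTIAL = {
    "legal": "licensed_attorney",
    "healthcare": "licensed_clinician",
}

@dataclass
class Draft:
    use_case: str                       # e.g. "legal", "healthcare"
    text: str                           # the AI-generated output
    reviewed_by: Optional[str] = None   # credential of the approving reviewer

def release(draft: Draft) -> str:
    """Refuse to disseminate unreviewed High-Risk output (overlay (a)),
    then prepend the AI disclosure (overlay (b))."""
    required = REQUIRED_CREDENTIAL.get(draft.use_case)
    if required and draft.reviewed_by != required:
        raise PermissionError(
            f"{draft.use_case!r} output requires review by a {required}"
        )
    return f"{AI_DISCLOSURE}\n\n{draft.text}"
```

In this sketch, attempting to release legal output without a licensed attorney's review raises an error; after review, the returned text always opens with the disclosure.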
1.9 Per-Context Obligations
The following obligations apply regardless of whether the use case is High-Risk:
1.9.1 Consumer-Facing Chatbots
If you operate a consumer-facing chatbot or conversational interface powered by AI through the Hoody gateway, the chatbot must disclose its AI status to users at minimum at the start of each chat session. EU AI Act Article 50 imposes this requirement for EU-resident users; this AUP imposes it universally.
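One simple way to satisfy the session-start requirement is to make the AI-status notice the first entry in every new session's transcript, before any model reply. The class and names below are an illustrative sketch, not part of the Hoody gateway.

```python
class ChatSession:
    # Disclosure shown at the start of every session, per the AUP
    # (and EU AI Act Article 50 for EU-resident users).
    DISCLOSURE = "You are chatting with an AI assistant."

    def __init__(self):
        # The disclosure is seeded into the transcript before any exchange,
        # so no reply can precede it.
        self.transcript = [self.DISCLOSURE]

    def reply(self, user_msg: str) -> str:
        answer = f"(model answer to: {user_msg})"  # placeholder for a gateway call
        self.transcript += [user_msg, answer]
        return answer
```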
1.9.2 Products Serving Minors
If your product serves minors (anyone under 18, or under the age of majority where higher), you must comply with applicable child-protection law (COPPA in the US, GDPR Article 8 in the EU, equivalents elsewhere). Specific to AI use: do not direct generative AI features at minors without appropriate safeguards (content filters, age-appropriate design, parental controls). Do not collect or process minors' personal data through the AI gateway without verifiable parental consent where required.
1.9.3 Agentic Deployments
If you deploy AI agents that act autonomously through the Hoody gateway (calling tools, making API calls, taking actions on behalf of users), the AUP applies to the agent's actions, not just to surface-level prompts. You are responsible for ensuring the agent's tool calls, actions, and decisions comply with the AUP. The fact that an action was taken by an agent rather than directly by you does not transfer responsibility.
If your agent operates in a High-Risk use case (per §1.8), the High-Risk overlays apply to the agent's actions.
1.9.4 High-Volume or Multi-Tenant Deployments
If you operate a high-volume deployment (above-typical token volume) or a multi-tenant deployment (your end users are themselves businesses with their own end users), additional safety obligations apply: you must implement rate limits, content filters, abuse reporting, and operational logging sufficient to detect and respond to misuse by your end users. You must cooperate with Hoody's investigation requests if Hoody receives upstream-provider concerns about your deployment.
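As one possible building block for these obligations, a per-tenant token bucket both enforces rate limits and produces an operational signal (denial counts) that makes end-user misuse visible. A sketch under assumed parameters; the class and its defaults are illustrative, not a Hoody-provided component.

```python
import time
from collections import defaultdict

class TenantRateLimiter:
    """Token-bucket limiter keyed by tenant, with a crude misuse log."""

    def __init__(self, capacity: int = 60, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        # Each bucket is [tokens_remaining, last_refill_timestamp].
        self.buckets = defaultdict(lambda: [float(capacity), time.monotonic()])
        self.denials = defaultdict(int)  # operational logging: denials per tenant

    def allow(self, tenant_id: str) -> bool:
        bucket = self.buckets[tenant_id]
        now = time.monotonic()
        tokens, last = bucket
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill_per_sec)
        if tokens >= 1:
            bucket[0], bucket[1] = tokens - 1, now
            return True
        bucket[0], bucket[1] = tokens, now
        self.denials[tenant_id] += 1  # a tenant racking up denials merits review
        return False
```

A tenant whose denial count climbs sharply is a candidate for the abuse-reporting and investigation steps this section requires.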
1.10 Conduct Toward Other Hoody Customers
You must not use the Services to:
(a) Attack, harass, scan, probe, or interfere with other Hoody customers' Containers, infrastructure, or End Users;
(b) Use information learned about other Hoody customers (whether through inadvertent disclosure, side-channel observation, or otherwise) to compete with, harm, or solicit them;
(c) Operate workloads designed to evade or escape Container isolation, privilege boundaries, or hypervisor protections.
2. Sensitive Data Restrictions
This section sets out specific restrictions on routing certain categories of personal data through the Services.
2.1 Healthcare Data (PHI under HIPAA)
The Hoody AI gateway does not currently support routing of HIPAA-regulated Protected Health Information (PHI). No Business Associate Agreement is in place between Hoody and any upstream provider as of the date of this AUP.
You must not route PHI through the Hoody AI gateway. This applies regardless of which upstream provider you select.
PHI may be stored in or processed by Customer Containers running on Customer's Bare Metal Server, subject to your direct relationship with the underlying hosting provider and your own HIPAA compliance responsibilities. Hoody does not have access to Container content (PP §2.1) and does not act as a Business Associate.
If you require PHI routing through the AI gateway, contact Hoody to discuss enterprise arrangements.
2.2 GDPR Article 9 Special Category Data
Routing of special-category personal data through the Hoody AI gateway requires:
(a) A valid GDPR Article 9 lawful basis applicable to your processing;
(b) The selected upstream provider's terms permit such use; and
(c) You implement appropriate safeguards.
Special categories include: health data; biometric data for identification purposes; genetic data; racial or ethnic origin; political opinions; religious or philosophical beliefs; sexual orientation; trade union membership; criminal-conviction data.
Hoody does not warrant that any upstream provider supports special-category processing. It is your responsibility to verify the upstream provider's terms and to obtain any specific arrangements (e.g., an upstream DPA addendum) before routing such data.
2.3 Biometric Categorization
You must not use AI to categorize natural persons based on biometric data (face, voice, iris, gait, etc.) to deduce protected attributes — race, political opinions, religious beliefs, philosophical beliefs, sexual orientation, gender identity, trade-union membership.
This is an absolute prohibition regardless of consent. EU AI Act Article 5(1)(g) imposes this prohibition for EU deployment; this AUP imposes it universally for traffic routed through the Hoody AI gateway.
2.4 Surveillance Use Cases
Per §1.7.10, but additionally:
(a) You must not use the Services or AI gateway for non-consensual location tracking, communication interception, or facial-recognition-based identification in public spaces.
(b) You must not use the Services for "untargeted scraping" of facial images from the internet or CCTV footage to build or expand facial recognition databases.
(c) Emotion recognition is prohibited except in narrowly scoped medical or safety contexts with the explicit informed consent of the affected individual.
3. Output Handling
3.1 Customer Responsibility for Outputs
You are responsible for the outputs generated by AI through the Hoody gateway, including:
(a) Reviewing outputs for accuracy, appropriateness, and lawfulness before relying on them for any purpose;
(b) Communicating output limitations to your own end users where applicable;
(c) Taking responsibility for any harms caused by outputs you act on.
Hoody and the upstream AI providers do not warrant the accuracy, completeness, or appropriateness of outputs.
3.2 No Training Competing Models
You must not use the inputs sent to or outputs received from upstream AI providers (via the Hoody gateway) to train, fine-tune, evaluate, or otherwise develop AI models that compete with the upstream providers. For example, you must not capture outputs at scale from a routed AI provider in order to train a model that replicates the provider's capabilities.
This prohibition flows down from upstream provider terms. Each upstream provider has its own version of this restriction; Hoody flows down a unified version here.
3.3 No Misrepresentation of AI Output as Human-Generated
You must not present AI-generated content as human-generated where the deception is the harm. Specifically:
(a) Where law requires AI disclosure (e.g., EU AI Act Article 50 for synthetic media), you must comply.
(b) In contexts where authenticity matters — journalism, customer service, dating, political discourse, professional credentials — you must not deceive your end users about whether content was AI-generated.
(c) Watermarking, provenance metadata, or other authenticity signals applied by upstream providers must not be removed or stripped.
3.4 Brand and Attribution Restrictions
You must not:
(a) Use Hoody's, the upstream AI providers', or the aggregators' trademarks or brand assets without the relevant rights-holder's authorization;
(b) Claim or imply endorsement, partnership, or affiliation with any of the above unless you have written authorization;
(c) Attribute AI outputs to Hoody, the upstream providers, or their personnel as official statements;
(d) Misrepresent your contractual or business relationship with Hoody, the upstream providers, or the aggregators.
4. Carve-Outs and Permitted Activities
This section lists activities that might appear to violate the AUP but are permitted, subject to the conditions stated.
4.1 Security Research
Storage of malware samples, reverse engineering of suspicious binaries, sandboxed detonation, fuzzing, vulnerability research, and similar legitimate security work is permitted, provided that:
(a) Samples are not active or capable of escape;
(b) You take reasonable measures to prevent unauthorized access;
(c) You do not deploy samples or research outputs against systems you are not authorized to test;
(d) You notify security@hoody.com if asked.
If your work is significant in scale or risk profile, notify Hoody in advance.
4.2 Penetration Testing
Penetration testing of your own Containers, applications, and infrastructure within the Services is permitted. Penetration testing of Hoody's infrastructure or other customers' resources is not permitted without express prior written consent from security@hoody.com.
Penetration testing of third-party systems is permitted only where you have explicit written authorization from the system owner.
4.3 AI Red-Teaming and Safety Research
Testing AI systems (including those reachable through the Hoody AI gateway) for safety, robustness, alignment, and adversarial behavior is permitted, including:
(a) Generating adversarial inputs to assess model behavior in your own evaluations;
(b) Testing prompt injection vectors against systems you own or have authorization to test;
(c) Documenting and disclosing findings in good faith.
This carve-out does not authorize:
(a) Production use of red-team techniques to obtain harmful outputs (§1.7.12);
(b) Distillation, scraping, or reverse-engineering upstream models for the purpose of training competitors (§1.6(g) and §3.2);
(c) Red-teaming against systems you are not authorized to test.
For coordinated red-teaming of upstream providers via Hoody, contact security@hoody.com to discuss appropriate authorization.
4.4 Educational and Demonstrative Content
Hosting examples, demos, or educational content relating to security, malware, or other technical subjects covered by the AUP prohibitions is permitted where the content is:
(a) Clearly labeled as educational or demonstrative; (b) Not weaponized or directly usable to harm specific third parties; (c) Hosted in a manner that prevents unauthorized download or execution.
5. Reporting Violations
To report a suspected AUP violation by another Hoody customer, contact abuse@hoody.com. Reports should include:
(a) The nature of the violation; (b) Sufficient information to identify the affected Hoody resource (URL, IP address, timestamp, headers); (c) Evidence supporting the report; (d) Reporter's contact information for follow-up.
Hoody investigates reports in good faith and takes action where warranted. Hoody does not commit to a specific response time, but typically acknowledges credible reports within 24 hours and substantively responds within 72 hours.
For copyright infringement, see §6.
For violations involving CSAM, contact abuse@hoody.com and, where appropriate, NCMEC (https://report.cybertip.org/) or equivalent national authority.
6. Copyright and IP Infringement
If you believe content hosted on the Services infringes your intellectual property rights, send a notice to dmca@hoody.com (or legal@hoody.com) containing:
(a) A physical or electronic signature of the rights-holder or authorized agent; (b) Identification of the copyrighted work or other IP claimed to be infringed; (c) Identification of the allegedly infringing material, with sufficient detail to locate it (URL, container ID, file path); (d) Reporter's contact information (address, phone, email); (e) A good-faith statement that use of the material in the manner complained of is not authorized; (f) A statement, under penalty of perjury, that the information in the notice is accurate and the reporter is authorized to act on the rights-holder's behalf.
Hoody will respond to substantially compliant notices in accordance with applicable law (including the DMCA where applicable, the EU DSA for EU-served content, and equivalent national procedures).
The alleged infringer may submit a counter-notice meeting the requirements of applicable law. Hoody will follow the applicable counter-notice procedure.
Repeat infringers will have their accounts terminated under Section 9.4 of the Terms of Service.
7. Hoody's Response to Violations
7.1 What Triggers Investigation
Hoody investigates suspected AUP violations on receipt of:
(a) Credible third-party abuse reports (sent to abuse@hoody.com);
(b) Automated detections of network-level abuse signals (DDoS amplification patterns, mining signatures, mass scanning);
(c) Notifications from upstream hosting providers, AI gateway upstream providers, blacklist authorities, or law enforcement;
(d) Legal process compelling investigation;
(e) Internal observations during legitimate Service operations.
7.2 Response Pipeline
Where Hoody receives a credible report or detects a likely violation, Hoody will typically:
- Notify Customer of the report and the alleged violation, where notification is possible and not prohibited;
- Set a deadline for Customer's response and remediation (typically 24-72 hours, depending on severity);
- Suspend the affected resource if no response is received, if the violation is ongoing and causing harm, or if the violation is on its face severe;
- Investigate based on metadata Hoody can observe and Customer's response;
- Restore the resource if the investigation does not substantiate the violation, or if Customer has remediated;
- Terminate under Section 9 of the Terms if the violation is substantiated and warrants termination.
For violations that pose immediate ongoing harm to third parties (active DDoS, ongoing CSAM hosting, active malware C&C), Hoody may suspend immediately and investigate after, without a Customer-response deadline.
7.3 Granularity of Action
Hoody will use the smallest action necessary to address the violation. In order of escalation:
(a) Throttle or rate-limit the affected resource; (b) Block specific outbound destinations or ports; (c) Suspend the affected Container only; (d) Suspend Customer's network access while preserving Containers; (e) Suspend Customer's AI gateway access; (f) Suspend the entire account; (g) Terminate the account under Section 9.4 of the Terms.
7.4 Upstream-Initiated Suspension
Where an upstream provider (Bare Metal Server hosting provider, AI gateway upstream provider, or aggregator) requires Hoody to suspend a Customer's access for AUP violations, Hoody will comply. In such cases:
(a) Hoody may suspend immediately and investigate after; (b) Hoody will give Customer notice and the reasons reported by the upstream provider where the upstream provider permits; (c) Reinstatement may require Customer to address the upstream provider's concerns directly; (d) Hoody is not liable for upstream-required suspensions, regardless of whether Hoody agrees with the upstream's determination.
7.5 No Obligation to Investigate
Hoody is not obligated to investigate every report or to take any specific action. Hoody exercises discretion based on the credibility of the report, the severity of the alleged conduct, and operational considerations.
7.6 No Liability for Action Taken in Good Faith
Hoody is not liable for any action taken in good faith to enforce this AUP, including suspensions or terminations later determined to have been based on incorrect or incomplete information. Hoody's liability for AUP enforcement is governed by Section 10 of the Terms.
8. Cooperation with Law Enforcement
Hoody cooperates with law enforcement and governmental authorities in response to lawful process. Hoody reserves the right to disclose information about Customer's use of the Services as required by law, including:
(a) In response to subpoenas, warrants, or court orders; (b) Where Hoody reasonably believes disclosure is necessary to prevent imminent physical harm to a person; (c) To detect, prevent, or address fraud, security, or technical issues; (d) To enforce these Terms or the rights of Hoody, its customers, or the public.
Where lawfully permitted, Hoody will give Customer notice of legal process compelling disclosure of Customer Content, in advance where possible. See the Privacy Policy Section 7 for additional detail on government data requests.
9. Modifications
Hoody may modify this AUP from time to time. Modifications take effect on the same notice basis as modifications to the Terms (see Section 18 of the Terms).
Note: where modifications are required by upstream providers (whether Bare Metal Server hosting providers, AI gateway providers, or aggregators), Hoody may apply the modifications on a shorter timeframe matching the upstream provider's notice to Hoody. Where this is the case, Hoody will say so in the notice.
10. Contact
- Abuse reports: abuse@hoody.com
- Security incidents: security@hoody.com
- Copyright/IP: dmca@hoody.com or legal@hoody.com
- Suspension appeals: appeals@hoody.com
- General legal questions: legal@hoody.com