General Product Info
What is Model HQ?
Model HQ is designed to be the fastest and easiest way to access AI models directly from your PC or laptop with no code, providing access to more than 100 state-of-the-art models ranging in size from 1 billion to 32 billion parameters.
Model HQ consists of (1) a Developer Kit and (2) a User Client App.
The Developer Kit is an easy-to-use, no-code platform for creating custom AI Apps for workflow automation, or RAG chatbots that query enterprise knowledge from document source data. Model HQ specifically focuses on small language models that can run on device (on AI PCs) or in data centers (such as on Xeon servers), so that no data needs to leave the enterprise security zone.
The User Client App is a downloadable app (less than 100 MB) that allows users with AI PCs to chat with models, deploy AI workflows (such as those created with the Model HQ Developer Kit) and RAG chatbots, all privately and securely.
Model HQ gives users a secure, private, and cost-effective way to access AI models. Once downloaded, the models can be used without a Wi-Fi connection, ensuring that user data and sensitive information never leave the device.
Is Model HQ only for chatbots?
Model HQ is much more than a basic chatbot. While it provides access to common chatbot models as well as models for coding, math, images (vision), and other specialized tasks, it also includes built-in document analysis, search, table reading, voice-to-text transcription, and other features ready to use with no coding. In addition, any custom AI App created with the Developer Kit can be seamlessly deployed in the User Client App, providing the easiest, most secure, and most cost-efficient way to customize and deploy AI workflows.
What devices does Model HQ support?
Model HQ works best with Intel AI PCs and Qualcomm Snapdragon X AI PCs. For Intel, we support Arrow Lake, Meteor Lake, and Lunar Lake chips. Model HQ is also compatible with older Intel laptops and PCs that are less than 5 years old. While Model HQ can run on AMD devices, it will only use the CPU and will not provide the same experience as an Intel device equipped with an iGPU and NPU. For all devices, we recommend at least 16 GB of RAM. Model speed and performance vary greatly by machine capacity (RAM size and iGPU or NPU capability).
Settings
Are there recommended settings prior to downloading models?
Users can download as many as 25 models at once when first getting started (this takes about 30 minutes, depending on your Wi-Fi connection). We highly recommend setting the device's Power Mode to "Best Performance" and setting the screen, sleep, and hibernate timeouts to at least 1 hour, so that the screen does not lock and interrupt the model download process. Please consult our User Documentation for more details.
How do I optimize my settings for model speed?
In certain instances, Microsoft Copilot (automatically installed on many Windows devices) can reduce model speed by up to 50%, even if no Copilot window is open or running. If you are not using Microsoft Copilot, model speed can be improved by removing the Copilot app from the Installed Apps list on your device.
Downloading
I downloaded the app but I can’t find it.
Check your Downloads folder for Modelhq and click on it. Once the app opens, it will ask for the license key you received in your welcome email.
Additional Resources
Where can I learn more tips and tricks for using Model HQ?
Please visit our YouTube channel and check out our Model HQ playlist for detailed videos, tips, and tricks: www.youtube.com/@llmware/playlists
Where can I get technical support?
Please visit https://llmware.ai/contact-us, where you can fill out a contact form. There is also a technical support button on the same page.