    I built Skillgrade (https://www.skillgrade.ai/) to try to solve a problem I keep seeing on professional networks: everyone is suddenly an "AI Expert," but there’s no real baseline for what that actually means.

    Being an AI power user looks completely different for a lawyer using Harvey AI for document analysis, a marketer using Midjourney, or an engineer using OpenClaw.

    I wanted to create a standardized, quick way to assess and showcase these practical skills.

    How it works:

    Contextual: It first asks for your industry and role (e.g., Legal, Executive, Engineering).

    Tool & Usage Based: It assesses which specific tools you use (ChatGPT, Claude, specialized industry tools) and how you apply them (e.g., basic email drafting vs. core workflow integration).

    The Output: It calculates a grade based on your inputs and generates a specific tier (like "Level 3: Orchestrator") and a certificate optimized for sharing on LinkedIn.
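
    To give a feel for the grading step, here is a minimal sketch of the kind of score-to-tier mapping involved (TypeScript; the input fields, weights, cutoffs, and every tier name except "Level 3: Orchestrator" are illustrative placeholders, not Skillgrade's actual values):

      // Minimal sketch of a self-report -> tier mapping.
      // All fields, weights, cutoffs, and tier names other than
      // "Level 3: Orchestrator" are illustrative placeholders.

      type UsageDepth = "basic" | "regular" | "workflow";

      interface Answers {
        toolsUsed: number;      // distinct AI tools the user reports using
        usageDepth: UsageDepth; // e.g., email drafting vs. core workflow integration
        roleRelevance: number;  // 0..1: how role-specific the reported tools are
      }

      const DEPTH_SCORE: Record<UsageDepth, number> = {
        basic: 1,    // occasional drafting/summarizing
        regular: 2,  // daily assistance on core tasks
        workflow: 3, // tools wired into core workflows
      };

      function tier(a: Answers): string {
        // Weighted sum normalized to 0..100; breadth is capped so
        // listing many tools can't outweigh depth of use.
        const score =
          Math.min(a.toolsUsed, 5) * 4 +
          DEPTH_SCORE[a.usageDepth] * 20 +
          a.roleRelevance * 20;
        if (score >= 80) return "Level 4: Architect";    // placeholder name
        if (score >= 55) return "Level 3: Orchestrator";
        if (score >= 30) return "Level 2: Practitioner"; // placeholder name
        return "Level 1: Explorer";                      // placeholder name
      }

      // Example: 3 tools, used daily, mostly role-specific
      console.log(tier({ toolsUsed: 3, usageDepth: "regular", roleRelevance: 0.75 }));
      // -> "Level 3: Orchestrator" (score 67)

    In the real questionnaire the weights would also depend on the industry and role you select up front, which this sketch leaves out.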

    The Tech / Challenges:

    Right now, the assessment relies heavily on self-reporting. One of the biggest challenges I’m thinking through is how to make the validation more rigorous without turning a 3-minute test into a 30-minute exam.

    I’d love the community's feedback on a few things:

    The Grading Logic: Does the tiering system feel accurate for your specific profession?

    Missing Tools/Use Cases: Are there major AI tools or specific workflows in your industry that the questionnaire completely misses?

    Future Validation: Any ideas on how to seamlessly integrate actual proof of work or skill testing without killing the conversion rate?

    Happy to answer any questions and would love to hear your thoughts!