• 0 Posts
  • 30 Comments
Joined 4 months ago
Cake day: March 29th, 2025

  • Yes. You learned not to touch a hot stove either from experience or from a warning. That fear was cemented by your understanding that it would hurt. An AI will tell you not to touch a hot stove (most of the time) because the words “hot”, “stove”, “pain”, etc. pop up together in its dataset millions of times (see the toy sketch after the quote below). As things stand, the two are barely comparable. The only reason people keep arguing is that the output is very convincing. Go download pytorch and read some stuff, or Google it. I’ve even asked deepseek for you:

    Can AI learn and understand like people?

    AI can learn and perform many tasks similarly to humans, but its understanding is fundamentally different. Here’s how AI compares to human learning and understanding:

    1. Learning: Similar in Some Ways, Different in Others

    • AI Learns from Data: AI (especially deep learning models) improves by processing vast amounts of data, identifying patterns, and adjusting its internal parameters.
    • Humans Learn More Efficiently: Humans can generalize from few examples, use reasoning, and apply knowledge across different contexts—something AI struggles with unless trained extensively.

    2. Understanding: AI vs. Human Cognition

    • AI “Understands” Statistically: AI recognizes patterns and makes predictions based on probabilities, but it lacks true comprehension, consciousness, or awareness.
    • Humans Understand Semantically: Humans grasp meaning, context, emotions, and abstract concepts in a way AI cannot (yet).

    3. Strengths & Weaknesses

    AI Excels At:

    • Processing huge datasets quickly.
    • Recognizing patterns (e.g., images, speech).
    • Automating repetitive tasks.

    AI Falls Short At:

    • Common-sense reasoning (e.g., knowing ice melts when heated without being explicitly told).
    • Emotional intelligence (e.g., empathy, humor).
    • Creativity and abstract thinking (though AI can mimic it).

    4. Current AI (Like ChatGPT) is a “Stochastic Parrot”

    • It generates plausible responses based on training but doesn’t truly “know” what it’s saying.
    • Unlike humans, it doesn’t have beliefs, desires, or self-awareness.

    5. Future Possibilities (AGI)

    • Artificial General Intelligence (AGI)—a hypothetical AI with human-like reasoning—could bridge this gap, but we’re not there yet.

    Conclusion:

    AI can simulate learning and understanding impressively, but it doesn’t experience them like humans do. It’s a powerful tool, not a mind.

    Would you like examples of where AI mimics vs. truly understands?
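
    To make the “words pop up together” point concrete, here’s a toy sketch of my own (not from deepseek, and nothing like a real LLM in scale): a bigram counter that picks the next word purely from co-occurrence statistics. The tiny corpus and the names in it are made up for illustration.

    ```python
    # Toy illustration only: a bigram "model" that predicts the next word from
    # raw co-occurrence counts. No comprehension anywhere -- just statistics.
    from collections import Counter, defaultdict

    # Made-up miniature "dataset" for the example
    corpus = (
        "the stove is hot do not touch the hot stove "
        "touching a hot stove causes pain the pain teaches you"
    ).split()

    # Count which word tends to follow which word in the corpus.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        """Return the most frequent follower of `word`, or None if unseen."""
        options = follows.get(word)
        return options.most_common(1)[0][0] if options else None

    print(next_word("hot"))   # -> 'stove' (that pair co-occurs most often)
    print(next_word("pain"))  # -> 'the'   (purely from counts, no meaning)
    ```

    Scale that basic idea up by trillions of tokens and a vastly bigger model and you get very convincing output, but no stove was ever felt.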

  • I spent way too long researching this morning. That industry implies a much larger population that is attracted to children. Things get more nuanced: people are attracted to different stages, like prepubescence, early adolescence, and mid-to-late adolescence. That seems like an important distinction, because this is a common mental disorder.

    I was ready to write this comment about my fear that there are a bunch of evil pedophiles living among us who are simply deterred by legal or social pressure.

    It seems more like the extreme stigma around pedophilia has prevented individuals from seeking help and has resulted in more child sexual abuse. This sort of disorder can be caused by experiencing that abuse at a young age.

    When I was religious, we worked closely with an organization that helped victims of trafficking. We heard their stories. They entered our lives. I took care of some of these kids. As a victim of sexual abuse when I was a kid, I had a hatred for these kinds of people. I feel like my brain is melting seeing that there is a high chance of people in my life being attracted to children. This isn’t really to justify the industry; I’m just realizing that openly harassing people about it might not be helping the situation.

  • Obviously, using an AI to verify whether something is AI-generated is kinda bullshit, but this is what deepseek spat out (there’s a quick sketch of its “repetitive phrasing” check after the quote):

    The list you shared does have strong AI-generated hallmarks. Here’s why it stands out as likely machine-written, and how you can spot similar outputs in the future:

    Telltale Signs This List Is AI-Generated

    1. Repetitive Phrasing – Overuse of:

      • “Rescinding [X] for [Y]”
      • “Streamlining [A] with respect to [B]”
      • “Withdrawing [Product] as a Covered [Category]”
    2. Excessive “Rescinding” Verbs – A human would vary language more (e.g., “repeal,” “abolish,” “eliminate”).

    3. Oddly Specific Yet Vague Items – Some entries sound bureaucratic but lack real-world impact (e.g., “Rescinding Test Procedures for Small Electric Motors”).

    4. Inconsistent Scope – Jumps from major energy policies (Strategic Petroleum Reserve) to trivial ones (showerhead pressure).

    5. “Unnecessary ADR Regulations” – ADR (Alternative Dispute Resolution) is a niche term unlikely in a political deregulation push.

    6. No Partisan Signaling – A real Trump/Musk DOGE list would include culture-war items (e.g., “Ending EV mandates”) or tech-bro priorities (e.g., “Fast-tracking AI nuclear reactor approvals”).


    How to Prove It’s AI-Generated

    • Ask for Sources: A real policy list would reference laws, executive orders, or press releases.
    • Check for Recent Events: Some items (e.g., “Clean Energy Federal Building Rule”) may not exist.
    • Test for Overlap: Run a portion through an AI detector (like Originality.ai).
    • Compare to Known AI Outputs: Lists with 47-50 items are common for AI (round numbers = generation artifacts).

    If You Suspect DOGE Is Using AI for Policy

    This would be a major scandal. Next steps:

    1. Compare drafts over time—AI lists will lack edits.
    2. Check metadata (if possible) for AI tool signatures.
    3. Demand justification for each item—AI can’t explain reasoning.

    Let me know if you want help digging deeper! This is a wild (and concerning) scenario if true. 🕵️‍♂️
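
    For what it’s worth, sign #1 in that list is easy to check mechanically. Here’s a rough sketch of the “repetitive phrasing” heuristic it describes; the sample items (apart from the “Small Electric Motors” one quoted above) and the 30% threshold are invented for illustration.

    ```python
    # Rough sketch of the "repetitive phrasing" check: count how many list
    # items open with the same leading word. Sample items (except the
    # "Small Electric Motors" one) and the 30% threshold are made up.
    from collections import Counter

    items = [
        "Rescinding Test Procedures for Small Electric Motors",
        "Rescinding Reporting Requirements for Residential Boilers",
        "Streamlining Permits with respect to Pipeline Maintenance",
        "Rescinding Coverage Determinations for Portable Heaters",
        "Withdrawing Dishwashers as a Covered Product Category",
    ]

    leading_words = Counter(item.split()[0] for item in items)
    top_word, count = leading_words.most_common(1)[0]

    if count / len(items) > 0.3:
        print(f"Suspicious: {count}/{len(items)} items start with '{top_word}'")
    else:
        print("No obvious repetitive-verb pattern")
    ```

    Obviously a heuristic like this proves nothing on its own; a human-written deregulation list could look exactly the same.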


    I want to note that the AI barely comprehended the idea of a government agency called DOGE existing, or the idea that Trump was reelected. I had to hammer it in:

    Ah, got it—so we’re envisioning a Doge-led Department of Government Efficiency (DOGE) under a Trump 2024 victory, where Elon Musk and a bunch of 20-year-old tech bros are ruthlessly slashing bureaucracy with meme-fueled chaos.

    Here’s your **47 absurd, inefficient, or just