• WoodScientist@lemmy.world · 2 days ago

        Won’t someone think of the poor, working-class furries? A fursuit can already cost 10 or 20 grand. Now they’re going to have to add cyborg body parts to the mix as well? Talk about gentrification!

        • LousyCornMuffins@lemmy.world · 2 days ago

          wait what. i made one for 200 bucks that the theater program i made it for is still using 30 years later (i build shit to last). you mean i could make money doing that?

          • WoodScientist@lemmy.world · 2 days ago

            I’m not in that community myself, but from what I hear a high-quality fursuit can cost $10-20k. Not sure what differences in construction may exist between those and what you made, but some artists definitely make decent livings making them.

            • anomnom@sh.itjust.works · 2 days ago

              Cooling systems, animatronics, durability.

              I’m not part of that community, but I’ve built props for advertisements and museum installations; durability and reliability are where most of the money is, especially in low-volume products.

            • Rai@lemmy.dbzer0.com · 2 days ago

              Mmhmm. A nice head is $2k USD or more. A whole custom suit starts at maybe $10k or $15k, and can easily exceed $20k.

      • interdimensionalmeme@lemmy.ml · 2 days ago

        Wait, I … could make human cat ears that actually have microphones in them and that turn toward your gaze, and the wearer could actually hear out of them if they’re wearing linked earphones … could even do some psychoacoustic filtering to give some kind of “superhearing” ability … Then some cunt like Elon is going to just steal my idea… meh. … eff it

        • Redex@lemmy.world · 2 days ago

          Damn, this is totally gonna be a thing, and I’m all for it. We shall all become cute cat girls.

          • interdimensionalmeme@lemmy.ml · 2 days ago (edited)

            Why do I keep doing this

            Use a combination of different-sized microphone diaphragms, not just MEMS devices, and use the ear shape’s parabolic concentration points to find the optimal location and orientation for each diaphragm. Include ultrasonic transceivers for out-of-band perception (also echolocation, room mapping, darksight) and for covert, out-of-band ultrasound communication with other catgirls that normies cannot hear (that means connecting transceivers AS transceivers, with an ultrasound-band amplifier for transmission).

            Use servo-galvos to modify the ear shape for acoustic focusing. If ear steering uses a gear drive, use double-helical gears with a low-acoustic-profile dampening coating on the gears (acoustic anti-reflection), because we are making a listening device and we don’t want to listen to gear noise! Consider ultrasonic motors instead of a geared drive.

            Software-wise, use direct Wi-Fi and BLE for communication (DECT sucks, proprietary standards suck). This should work with a Raspberry Pi and only open-source software out of the box, no drivers needed; yes, that means multichannel transmission, with all the processing done off-device in actual software. We’re using the various microphones as an asynchronous, heterogeneous phased array. Unlike regular phased arrays, which have many identical receivers along a flat plane, here we skip a couple of technological steps and use “receiver diversity” in the phased-array design: not only are the receivers in different places NOT on a plane, they also have different acoustic profiles (different sensitivity at different frequencies) and anisotropic reception patterns (due to the ear’s lens shape). All of that diversity has to be correlated across receivers to create an acoustic map of the world, so we can decide which sounds are psychoacoustically relevant, point the ears in that direction, judge whether the received signal is actually signal or noise, and send it to the user in real time (20 ms max processing, top end). Make sure the ear-shape flexibility is enough to receive and focus some signal backwards. There are still a few things to consider, but I think that’s mostly it.

            For the earbud, use closed-loop ear-canal acoustic feedback for the amplifier drive, and use only open standards and open firmware in all devices. Use myoelectric ear movement as a control input, with at least these two modes: one ear’s movement manually steers the acoustic phased array, and the other ear’s movement selects the auditory focus among signal sources (that is, automatic switching between detected relevant auditory sources). Add a RED+IR laser pointer to the ears that can be used as a pointer to indicate (covertly, for IR) which auditory source is currently selected and in what direction (ensure positional live tracking is reflected in the beam direction). Use a sub-vocal command interface for higher-function control of the listening device. Provide a standardized pattern and calibration-equipment definition for microphone correlation calibration, and use only open hardware assemblages.

            Include live transcription to text of all received audio, with capacity for at least 12 simultaneous audio-to-text transcription channels, plus live MP3QR decoding and live decoding and transcription of every other digital audio transmission standard as part of the transcription capability. Include a text-summary ability and automatic voice-signature identification and tagging of the decoded text stream. At least 4 terabytes of storage, maximum head weight 150 grams excluding battery, and the battery system must give 1 minute of no-battery operation time or a dual-battery swapping system. Include a hardware audio encoder and decoder supporting all current ffmpeg codecs. Include an acoustic counter-battery targeting receiver system. Oh, almost forgot: a dual laser system mounted with a microprism (for an aimed laser-microphone ability, to listen to sounds through walls or from vibrating surfaces).
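
            For the phased-array part, here is a minimal delay-and-sum sketch in Python. Everything in it is invented for illustration (mic positions, per-mic gains, the test signal), and a scalar gain is only a crude stand-in for real per-mic frequency profiles:

            ```python
            import numpy as np

            FS = 48_000   # sample rate (Hz)
            C = 343.0     # speed of sound (m/s)

            # Invented non-planar mic positions on an ear-shaped shell (metres).
            MIC_POS = np.array([
                [0.00, 0.00, 0.00],
                [0.02, 0.01, 0.01],
                [0.01, 0.03, 0.02],
                [-0.01, 0.02, 0.03],
            ])
            # Invented per-mic broadband sensitivities (crude stand-in for
            # diaphragm diversity; real mics differ per frequency, not by a scalar).
            MIC_GAIN = np.array([1.0, 0.8, 1.2, 0.9])

            def steering_delays(direction):
                """Integer-sample delays aligning a far-field plane wave
                arriving from `direction` across all mics."""
                d = np.asarray(direction, float)
                d = d / np.linalg.norm(d)
                tau = MIC_POS @ d / C      # relative arrival-time offsets
                tau = tau - tau.min()      # make all delays non-negative
                return np.round(tau * FS).astype(int)

            def delay_and_sum(channels, direction):
                """Gain-compensate, time-align, and average the channels."""
                delays = steering_delays(direction)
                n = channels.shape[1] - delays.max()
                out = np.zeros(n)
                for ch, dly, g in zip(channels, delays, MIC_GAIN):
                    out += ch[dly:dly + n] / g
                return out / len(channels)

            # Toy test: a 1 kHz tone from +x buried in noise on each mic.
            rng = np.random.default_rng(0)
            t = np.arange(FS) / FS
            src = np.sin(2 * np.pi * 1000 * t)
            dly = steering_delays([1, 0, 0])
            mics = np.stack([g * np.pad(src, (d, 0))[:FS]
                             for d, g in zip(dly, MIC_GAIN)])
            mics += 0.5 * rng.standard_normal(mics.shape)
            out = delay_and_sum(mics, [1, 0, 0])
            # Averaging 4 aligned channels cuts uncorrelated noise by ~sqrt(4).
            print(f"single-mic RMS {mics[0].std():.3f} -> beamformed RMS {out.std():.3f}")
            ```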

            Ok Machine, convert this schizo-ing document into a Raytheon Cybernetic Auditory Telemetry & Echolocation with Anisotropic Reception (CAT&EAR) engineering requirements draft document

            • interdimensionalmeme@lemmy.ml · 2 days ago (edited)

              🛰️ Raytheon CAT&EAR System

              Cybernetic Auditory Telemetry & Echolocation with Anisotropic Reception

              Engineering Requirements Draft — v1.0

              📑 Document Overview

              This document outlines the full set of engineering requirements for the CAT&EAR system, a wearable, cybernetic auditory perception platform. CAT&EAR is a heterogeneous, phased-array auditory sensor suite that uses biomimetic design, ultrasonic telemetry, laser vibrometry, and advanced audio signal processing to enable real-time environmental awareness and communication.

              The system is designed to operate autonomously, using only open standards and open-source software, while supporting embedded AI-driven perceptual functions. This document reflects both functional and non-functional requirements for CAT&EAR across all relevant subsystems.

              The CAT&EAR system (Cybernetic Auditory Telemetry & Echolocation with Anisotropic Reception) is a next-generation, wearable auditory intelligence platform designed for advanced signal perception, real-time environmental awareness, and covert communication. It leverages a biomimetic ear design combined with a heterogeneous microphone array—featuring diverse diaphragms and directional acoustic profiles—to form an asynchronous, non-planar phased array. This allows it to isolate, enhance, and track psychoacoustically relevant audio sources with high spatial precision, even in noisy or cluttered environments. Real-time beamforming, signal classification, and source switching are handled off-device via open-source DSP pipelines, ensuring low latency (≤20 ms) and full operational transparency.

              In addition to traditional sound acquisition, CAT&EAR incorporates ultrasonic echolocation and laser vibrometry, enabling through-wall audio surveillance and remote surface vibration analysis using IR/RED laser beams with microprism-guided targeting. The system includes a covert ultrasonic communication channel, allowing encrypted, inaudible data exchange between units—ideal for non-verbal team coordination. Myoelectric sensors and sub-vocal command inputs provide silent, intuitive control interfaces for users, allowing manual beam steering or hands-free selection of tracked sound sources. Ear motion actuators and a visible/infrared laser pointer visually indicate attention direction, enhancing situational awareness without audible cues.
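
              As a toy illustration of the time-of-flight ranging behind the echolocation claim, a simulated chirp-and-matched-filter sketch (the sample rate, pulse shape, and noise levels are arbitrary choices, not spec values):

              ```python
              import numpy as np

              FS = 192_000   # sample rate; comfortably above the 30-40 kHz band
              C = 343.0      # speed of sound (m/s)

              # 2 ms linear chirp from 30 kHz to 40 kHz as the probe pulse.
              t = np.arange(int(0.002 * FS)) / FS
              chirp = np.sin(2 * np.pi * (30_000 * t + 2.5e6 * t**2))

              # Simulate an echo off a surface 1.5 m away (3 m round trip) plus noise.
              delay = int(2 * 1.5 / C * FS)
              rx = np.zeros(delay + chirp.size + 1000)
              rx[delay:delay + chirp.size] += 0.2 * chirp
              rx += 0.05 * np.random.default_rng(1).standard_normal(rx.size)

              # Matched filter: the correlation peak marks the round-trip delay.
              corr = np.correlate(rx, chirp, mode="valid")
              tof = np.argmax(np.abs(corr)) / FS
              print(f"estimated distance: {C * tof / 2:.2f} m")   # ~1.50 m
              ```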

              Field usage scenarios include reconnaissance, electronic surveillance, remote eavesdropping, low-visibility communication, and audio-based environmental mapping in both urban and wilderness environments. The system is optimized for silent operation, rapid deployment, and open hardware integration. All processing occurs locally or on an open compute platform (e.g., Raspberry Pi), with no reliance on proprietary software or cloud infrastructure. With up to 12-channel live transcription, digital audio decoding, 4TB onboard storage, and support for all major codecs, CAT&EAR serves both tactical intelligence roles and high-end experimental research in audio-based perception systems.

              System requirements

              🎤 Microphone Array & Acoustic Sensors

              Mic Diaphragm Diversity – Use MEMS and larger diaphragm mics.
              Earshape Mic Placement – Place mics at parabolic acoustic focus points.
              Ultrasound Transceivers – Include US sensors for echolocation and darksight.
              Ultrasound Data Comms – Use US transceivers for covert device communication.
              Heterogeneous Phased Array – Mic array shall be non-planar and diverse.
              Frequency-Profile Diversity – Use mics with different frequency sensitivities.
              Anisotropic Reception – Account for directional response patterns from ear shape.
              Psychoacoustic Focus – Detect and prioritize perceptually relevant signals.
              Mic Cross-Correlation – Synchronize all mic data spatially and temporally (see the sketch after this list).
              Real-Time Acoustic Map – Build a 3D sound map from multi-mic input.
              Mic Calibration Pattern – Provide physical pattern for mic array calibration.
              Open Hardware Calibration – Use only open hardware for calibration tools.
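
              A minimal sketch of the mic cross-correlation requirement above: recover the sample offset between two mics from a shared calibration burst so the channels can be aligned. The burst, offset, and noise levels are all invented:

              ```python
              import numpy as np

              FS = 48_000
              rng = np.random.default_rng(2)

              click = rng.standard_normal(256)   # broadband calibration burst
              true_offset = 37                   # samples by which mic B lags mic A

              a = np.concatenate([click, np.zeros(1000)])
              b = np.concatenate([np.zeros(true_offset), click,
                                  np.zeros(1000 - true_offset)])
              a = a + 0.1 * rng.standard_normal(a.size)
              b = b + 0.1 * rng.standard_normal(b.size)

              # Full cross-correlation; the peak location gives B's lag versus A.
              corr = np.correlate(b, a, mode="full")
              lag = np.argmax(corr) - (a.size - 1)
              print("estimated offset:", lag, "samples")   # ~37
              ```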

              🧭 Ultrasonic Subsystem

              Ultrasound Mapping – Perform echolocation for 3D environmental awareness.
              True Transceiver Mode – Ultrasound sensors must transmit and receive.
              Ultrasound Band Amp – Include amplifier suitable for US transmission.
              Covert US Communication – Transmit data in the inaudible US band (sketch after this list).
              Spatial Mapping via US – Derive positional data from ultrasound TOF.
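
              A toy encode/decode sketch of the covert ultrasonic link: on-off keying of a 24 kHz carrier, simulated only. The carrier, bit rate, and threshold rule are illustrative choices, and the threshold assumes at least one “1” bit per frame:

              ```python
              import numpy as np

              FS = 96_000
              CARRIER = 24_000        # above typical adult hearing
              BIT_SAMPLES = 480       # 5 ms per bit -> 200 bit/s

              def encode(bits):
                  t = np.arange(BIT_SAMPLES) / FS
                  tone = np.sin(2 * np.pi * CARRIER * t)
                  return np.concatenate([tone * b for b in bits])

              def decode(signal):
                  frames = signal[: signal.size // BIT_SAMPLES * BIT_SAMPLES]
                  frames = frames.reshape(-1, BIT_SAMPLES)
                  energy = (frames ** 2).mean(axis=1)
                  # Simple energy detector: half the strongest frame's energy.
                  return (energy > energy.max() / 2).astype(int)

              msg = np.array([1, 0, 1, 1, 0, 0, 1, 0])
              rx = encode(msg) + 0.1 * np.random.default_rng(3).standard_normal(
                  msg.size * BIT_SAMPLES)
              got = decode(rx)
              print("sent   :", msg)
              print("decoded:", got, " ok:", np.array_equal(got, msg))
              ```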

              👂 Ear Shape & Actuation

              Dynamic Ear Focus – Use moving ear shapes to focus sound.
              Servo Actuated Ears – Use servos to reorient ears toward signals.
              Double Helical Gears – Use quiet gears for mechanical actuation.
              Acoustic Dampened Gears – Coat gears to suppress audible reflections.
              Ultrasonic Motors Preferred – Prefer silent ultrasonic motors over gears.
              Backwards Reception – Ears must support rearward sound reception.
              Flexible Ear Shaping – Shape ear surfaces dynamically for beam control.
              Motion-Linked Focus – Ear movement must track beamforming direction.
              No Mechanical Noise Leakage – Prevent gear vibration from reaching mics.

              📡 Communication & Connectivity

              Wi-Fi + BLE Only – Support open wireless; no DECT or proprietary links.
              Open Standard Protocols – Use only open protocols for communication.
              Multichannel Audio Streaming – Support multiple audio streams over network.
              Driverless Operation – Require no proprietary drivers for functionality.

              🧠 Software & Signal Processing

              Open Source Only – All code and firmware must be fully open source.
              Offloaded Processing – All signal processing handled off-device.
              Max 20ms Latency – Entire processing pipeline must be under 20ms.
              Live Beamforming – Perform real-time signal steering and separation.
              Real-Time Source Relevance – Continuously rank sources by importance.
              Noise vs Signal Detection – Separate noise from structured signals (sketch after this list).
              Real-Time Source Switching – Auto-switch auditory focus intelligently.
              Plug-and-Play Sensor Config – Support hot-swappable or modular sensors.
              Onboard Sub-Vocal Control – Allow silent vocal commands for control.
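
              One simple way to approach the noise-vs-signal requirement, offered as an assumption rather than the spec’s prescribed method: spectral flatness, which sits near 1 for white noise and far below 1 for tonal or structured sound:

              ```python
              import numpy as np

              FS = 48_000

              def spectral_flatness(x, nfft=1024):
                  """Geometric mean over arithmetic mean of an averaged power
                  spectrum: ~1 for white noise, far below 1 for tonal signals."""
                  frames = x[: x.size // nfft * nfft].reshape(-1, nfft)
                  psd = (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=0) + 1e-12
                  return np.exp(np.log(psd).mean()) / psd.mean()

              t = np.arange(FS) / FS
              tone = np.sin(2 * np.pi * 440 * t)                    # structured
              noise = np.random.default_rng(4).standard_normal(FS)  # unstructured

              print(f"tone  flatness: {spectral_flatness(tone):.3f}")   # close to 0
              print(f"noise flatness: {spectral_flatness(noise):.3f}")  # close to 1
              ```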

              🎧 Earbuds / Audio Output

              Canal Mic Feedback Loop – Use in-ear mics for real-time output correction.
              Open Firmware Audio Chain – Use programmable amps with open firmware.
              Drive Adaptation by Feedback – Earbud output adjusts based on canal feedback (sketch below).
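
              A minimal sketch of the canal-feedback loop: a slow integral controller nudges the drive gain until the simulated in-canal level matches a target. The coupling factor and gains are invented constants:

              ```python
              import math

              TARGET_DB = 70.0   # desired in-canal level
              COUPLING = 0.4     # unknown ear-canal transfer factor (simulated)
              KI = 0.2           # integral gain; kept small for stability

              gain_db = 0.0
              for step in range(30):
                  drive_db = 60.0 + gain_db                        # source + correction
                  canal_db = drive_db + 20 * math.log10(COUPLING)  # canal-mic reading
                  error = TARGET_DB - canal_db
                  gain_db += KI * error                            # integrate the error

              print(f"converged gain: {gain_db:+.1f} dB, canal level: {canal_db:.1f} dB")
              ```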

              🎮 Control Interfaces

              Myoelectric Input Support – Accept muscle signals as control input.
              Manual Steering Mode – One ear for beam steering via myoelectric input.
              Signal Selection Mode – One ear for selecting tracked signal via input.
              Sub-Vocal Command Mode – Use throat activity to control high-level tasks.

              🔦 Laser Pointer System

              Dual Laser Module – Use both red (visible) and IR (covert) lasers.
              Laser Aiming Reflects Beam – Beam direction matches acoustic focus.
              IR Laser for Stealth – IR laser used to show focus discreetly.

              🧾 Transcription & Recognition

              Live Audio Transcription – Convert incoming audio to live text.
              12 Channel Transcription – Handle at least 12 simultaneous streams (skeleton after this list).
              MP3QR & Digital Decoding – Decode digital audio formats in real time.
              Live Text Summarization – Generate summaries from live transcripts.
              Voice Signature Tagging – Identify and label speakers in text.
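
              A skeleton for the 12-channel requirement: a worker pool fanning audio chunks out to per-channel transcribers. `transcribe_chunk` is a hypothetical placeholder, not a real library call; in practice it would wrap an open-source ASR engine:

              ```python
              from concurrent.futures import ThreadPoolExecutor
              import queue

              N_CHANNELS = 12

              def transcribe_chunk(channel_id: int, chunk: bytes) -> str:
                  # Hypothetical stand-in for a real speech-to-text call.
                  return f"[ch{channel_id}] ({len(chunk)} bytes transcribed)"

              def channel_worker(channel_id: int, chunks: "queue.Queue[bytes]") -> list[str]:
                  lines = []
                  while True:
                      chunk = chunks.get()
                      if chunk is None:          # sentinel: end of stream
                          return lines
                      lines.append(transcribe_chunk(channel_id, chunk))

              # Feed each channel two fake chunks plus a sentinel, then collect.
              queues = [queue.Queue() for _ in range(N_CHANNELS)]
              for q in queues:
                  q.put(b"\x00" * 960)
                  q.put(b"\x00" * 960)
                  q.put(None)

              with ThreadPoolExecutor(max_workers=N_CHANNELS) as pool:
                  futures = [pool.submit(channel_worker, i, q)
                             for i, q in enumerate(queues)]
                  for f in futures:
                      print(f.result()[0])
              ```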

              💾 Storage

              4TB Local Storage – Minimum onboard capacity of 4 terabytes.

              ⚖️ Physical & Power

              ≤150g Head Weight – Total device weight must not exceed 150 g, excluding battery.
              1-Min No-Battery Buffer – Must operate 1 minute without battery power.
              Dual Battery Swap – Hot-swap batteries without power loss.

              🎛️ Audio Encoding & Codecs

              HW Audio Codec Support – Hardware encoder/decoder for all ffmpeg codecs.

              🧱 System Design Principles

              Modular Hardware – All subsystems must be physically modular.
              User Privacy by Default – All processing must be local and secure.
              No Cloud Dependency – System must function entirely offline.
              Mainline Linux Support – Fully supported by Linux kernel and stack.
              Open Protocol APIs – All I/O and control must use open APIs.

              🛡️ Counter-Acoustic Defense

              Passive Threat Detection – Detect and localize hostile audio sources.
              Acoustic Counter-Battery – Track and indicate direction of intrusive signals.

              🧪 Fallback & Safety

              Failsafe Passive Mode – Fall back to passive listening if system fails.

              🌡️ Environment & Durability

              Passive Cooling Only – No fans; silent passive thermal control only.
              Water-Resistant Design – Use hydrophobic materials for exterior protection.

              🧰 Maintenance & Testability

              Open Test Fixtures – All testing hardware must be reproducible and open.
              Self-Test & Calibration – System must run periodic self-alignment checks.
              Community Repairable – Designed to be easily maintained by users.

              🔗 Licensing

              Fully Open Licensed – All hardware, firmware, and software must use open licenses.

              🔦 Laser Microphone Capability

              Laser Mic via Microprism – Dual laser system shall include a microprism to enable laser microphone functionality.
              Aimed Surface Listening – System shall capture audio from vibrating surfaces (e.g. windows, walls) via laser beam reflection (toy model after this list).
              Covert Through-Wall Listening – IR laser + sensor must support long-range audio pickup from remote surfaces without line-of-sight audio.
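
              A toy model of the laser-mic signal path, purely to show the shape of the recovery step: surface vibration modulates the reflected beam’s intensity, and stripping the DC level leaves a scaled copy of the audio. Real vibrometry is interferometric and far more involved; every constant here is invented:

              ```python
              import numpy as np

              FS = 48_000
              t = np.arange(FS) / FS
              speech = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)

              # Photodiode reading: large DC term plus a small vibration-driven ripple.
              photodiode = (1.0 + 0.001 * speech
                            + 1e-4 * np.random.default_rng(5).standard_normal(FS))

              recovered = photodiode - photodiode.mean()   # strip the DC carrier
              recovered /= np.abs(recovered).max()         # normalise

              # Correlation with the original shows the audio survives the bounce.
              corr = np.corrcoef(speech, recovered)[0, 1]
              print(f"correlation with source audio: {corr:.3f}")   # close to 1
              ```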

              ✅ Summary

              Total Requirements: 76
              System Class: Wearable, cybernetic, audio perception platform
              Design Goals: Open-source, real-time, stealth-capable, user-repairable

    • Zachariah@lemmy.world · 2 days ago

      Is it prehensile? This could lead to octopus-like abilities if you added more than one.