H2S127: Video type recommendation for web transfer:

Hu-red<MIC>: 🔑 Asynchronous video is delayed delivery, for thought timelines that diverge<biochem>

CM H4S1: Bit rate recommendations:

<Mozilla, “Codecs Web-RTC”>: Given a 20 millisecond frame size, the following table shows the recommended bit rates for various forms of media.

Media type                          | Recommended bit rate range
Narrow-band speech (NB)             | 8 to 12 kbps
Wide-band speech (WB)               | 16 to 20 kbps
Full-band speech (FB)               | 28 to 40 kbps
Full-band monaural music (FB mono)  | 48 to 64 kbps
Full-band stereo music (FB stereo)  | 64 to 128 kbps
via Mozilla<a-r>:

The bit rate may be adjusted at any time. In order to avoid network congestion, the average audio bit rate should not exceed the available network bandwidth (minus any other known or anticipated added bandwidth requirements).
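The guidance above can be sketched as a small rate picker: aim for the top of the recommended Opus range for the media type, but never exceed the available bandwidth minus other known requirements. The range values come from the table above; the function and dictionary names are my own.

```python
# Recommended Opus bit rate ranges (kbps) from the Mozilla table above.
OPUS_BITRATE_KBPS = {
    "NB":        (8, 12),    # narrow-band speech
    "WB":        (16, 20),   # wide-band speech
    "FB":        (28, 40),   # full-band speech
    "FB mono":   (48, 64),   # full-band monaural music
    "FB stereo": (64, 128),  # full-band stereo music
}

def target_audio_bitrate(media_type: str,
                         available_kbps: float,
                         other_kbps: float = 0.0) -> float:
    """Top of the recommended range, capped at available bandwidth
    minus any other known or anticipated bandwidth requirements."""
    low, high = OPUS_BITRATE_KBPS[media_type]
    ceiling = available_kbps - other_kbps
    if ceiling < low:
        raise ValueError("not enough bandwidth for this media type")
    return min(high, ceiling)

print(target_audio_bitrate("FB stereo", available_kbps=1000, other_kbps=200))  # 128
print(target_audio_bitrate("WB", available_kbps=18))                           # congested: 18
```

Because Opus allows the bit rate to be adjusted at any time, this picker can simply be re-run whenever the bandwidth estimate changes.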

H3S1: Bit.rate-consideration:

Hu: Our goal is a peak minimum of 200 kbps<Turing>.

H3S2: Resolution:

Hu: From experience on YouTube, I can attest: 480p is the minimum for standard-definition viewing, 144p is the minimum for distinguishing human emotions on faces, and 720p is reasonable high definition. Anything above that is super.MVP-luxury.

H3S3: Frame-rate:

Hu: 30 fps is preferred here; 60 fps may be nauseating at times, and anything below 15 fps would be significantly uncomfortable.
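Putting the resolution, frame-rate, and bit-rate goals together shows why a codec is unavoidable: a quick back-of-envelope sketch of the uncompressed bit rate at 480p/30 fps versus the 200 kbps peak minimum above. The 24 bits/pixel figure assumes the 8-bit-per-component RGB representation described in the color section below; the helper name is mine.

```python
def raw_bitrate_kbps(width: int, height: int, fps: int,
                     bits_per_pixel: int = 24) -> float:
    """Bit rate of the video stream before any codec touches it."""
    return width * height * bits_per_pixel * fps / 1000

raw = raw_bitrate_kbps(640, 480, 30)   # 480p at the preferred 30 fps
goal = 200                             # kbps, the stated peak minimum
print(f"raw: {raw:,.0f} kbps -> needs ~{raw / goal:,.0f}:1 compression to hit {goal} kbps")
```

At 480p/30 fps the raw stream is over 220,000 kbps, so meeting a 200 kbps target means the codec must deliver compression on the order of 1000:1.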

H3S4: Codec:

H3S5: Encoding images to binary<Turing>:

H4S1: Representing color:

<Mozilla a-r>: Representing the colors in an image or video requires several values for each pixel. What those values are depends on how you “split up” the color when converting it to numeric form. There are several color models, and video codecs make use of one or more of these to represent their pixels during the encoding process as well as after decoding the video frames. There are two primary methods used to represent RGB samples: using integer components and using floating-point components. When using integer components, RGB color uses 8 bits each of red, green, and blue, as well as potentially 8 bits of alpha (the amount of transparency).
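The integer-component representation described above can be sketched as packing 8 bits each of red, green, blue, and alpha into one 32-bit value. The RGBA byte order and function names here are my own arbitrary choices; real formats vary (ARGB, BGRA, etc.).

```python
def pack_rgba(r: int, g: int, b: int, a: int = 255) -> int:
    """Pack four 8-bit components into a single 32-bit pixel value."""
    assert all(0 <= c <= 255 for c in (r, g, b, a)), "8 bits per component"
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel: int) -> tuple:
    """Recover the four 8-bit components from a packed pixel."""
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

opaque_red = pack_rgba(255, 0, 0)
print(hex(opaque_red))          # 0xff0000ff
print(unpack_rgba(opaque_red))  # (255, 0, 0, 255)
```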

H5S1: Grey.scale-asymmetry: Because the eye has vastly more rods than cones (about 120 million rods to around 6 or 7 million cones), we see detail in greyscale, with color being far less detailed. In essence, our eyes are like a camera with two image sensor chips: one greyscale and one color. The greyscale sensor is 120 megapixels, while the color sensor is only about 7 megapixels.

H5S2: Cone-types: There are three types of cones, each of which responds to a particular wavelength band of incoming light and to the intensity of the light at that wavelength. The brain compares the relative response peaks of the three cone types to work out the intensity and hue of the light arriving at that part of the retina.
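Video codecs exploit the rod/cone asymmetry above by storing a full-detail greyscale (luma) channel separately from the less-detailed color information. As an illustration of my own (not from the source), here is the standard BT.601 luma weighting for an 8-bit RGB pixel; note that green, which the eye perceives as brightest, dominates.

```python
def luma(r: int, g: int, b: int) -> float:
    """Greyscale brightness of an 8-bit RGB pixel (BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(255, 255, 255)))                    # white: 255
print(round(luma(0, 255, 0)), round(luma(0, 0, 255)))  # green 150 vs blue 29
```

Pure green reads roughly five times brighter than pure blue, which is why codecs can afford to throw away far more chroma detail than luma detail.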

Post: Hu’s Law of Encoding: The more data you plan to transfer, the more upfront cost you can justify in establishing the compression (zip) encoding algorithm at the endpoints.
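Hu’s Law above is a break-even calculation: a heavier encoder setup (negotiation, dictionary exchange, etc.) pays for itself once the transfer is large enough. A minimal sketch, with all numbers hypothetical:

```python
def total_bits(payload_bits: float, setup_bits: float, ratio: float) -> float:
    """Bits on the wire: one-time setup cost plus compressed payload."""
    return setup_bits + payload_bits / ratio

def setup_justified(payload_bits: float, setup_bits: float, ratio: float) -> bool:
    """Is paying the setup cost cheaper than sending the payload raw?"""
    return total_bits(payload_bits, setup_bits, ratio) < payload_bits

print(setup_justified(10_000, setup_bits=50_000, ratio=5))      # small transfer: False
print(setup_justified(10_000_000, setup_bits=50_000, ratio=5))  # large transfer: True
```

The larger the planned transfer, the larger a setup cost the inequality tolerates, which is exactly the law’s claim.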

Media Type Registration of RTP Payload Formats: https://datatracker.ietf.org/doc/html/rfc4855
RTP Payload Format for H.264 Video: https://datatracker.ietf.org/doc/html/rfc6184, https://datatracker.ietf.org/doc/html/rfc6184#page-45
RTP Payload Format for VP8 Video
WebRTC Video Processing and Codec Requirements: https://datatracker.ietf.org/doc/html/rfc7742
Negotiation of Generic Image Attributes in the Session Description Protocol (SDP): https://datatracker.ietf.org/doc/html/rfc6236
