H2S123: getUserMedia-JS/ PHP/HTML/AJAX/MySQL live video transmission max.red-build:

Hu: This will be a Web.RTC-less simple | build that nevertheless will transmit video with ultra.low-latency, defined as 100-ms or less:

H3S1: Live.code-read: STAR TECH Web-RTC:

Hu: Without any idea of the structure yet, I will attempt to extract single lines of code # that I believe are relevant:

H4S1: 8:08 in pt-021:

mediaStream = await navigator.mediaDevices.getUserMedia(mediaConst);

H5S1: await<built-in>:

Mozilla<a-r>: The await | operator is used to wait for a Promise and get its fulfillment | value. It can only be used inside an async function or at the top | level of a module.

H6S1: Promise<elab>:

The Promise | object represents the eventual | completion (or failure) of an asynchronous | operation and its resulting | value. A Promise is a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers with an asynchronous action’s eventual success value or failure reason. This lets asynchronous methods return values like synchronous methods: instead of immediately returning the final value, the asynchronous method returns a promise to supply the value at some point in the future. A Promise is in one of these states:

  • pending: initial state, neither fulfilled nor rejected.
  • fulfilled: meaning that the operation was completed successfully.
  • rejected: meaning that the operation failed.

Post: see-more: <accounts-receivable><ch-19><future-write><creative!> Hu: An event is just a fancy, object-level | pseudo-syn for a return value; both are conditional | outcomes of some method, acting upon properties, or arguments, of a function. Mozilla: The eventual | state of a pending promise can either be fulfilled with a value or rejected with a reason (error). A promise is said to be settled if it is either fulfilled or rejected, but not pending.

A local villager’s sketch of how this might work, on some level.<r: Mozilla>

Mozilla: You will also hear the term resolved used with promises — this means that the promise is settled or “locked-in” to match the eventual | state of another | promise, and further resolving or rejecting it has no effect. 
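A minimal sketch of the three states above, runnable in Node.js; settleAfter() is a hypothetical stand-in for any async operation, not part of the build:

```javascript
// settleAfter() is a hypothetical helper: it returns a Promise that is
// pending for `ms` milliseconds, then either fulfills or rejects.
function settleAfter(ms, value, shouldFail) {
  return new Promise((resolve, reject) => {
    setTimeout(() => (shouldFail ? reject(new Error(value)) : resolve(value)), ms);
  });
}

async function demo() {
  const p = settleAfter(10, "frame-1", false);    // pending at this instant
  const fulfilled = await p;                      // fulfilled: value delivered
  let reason = null;
  try {
    await settleAfter(10, "camera-denied", true); // rejected: handler gets reason
  } catch (e) {
    reason = e.message;
  }
  return { fulfilled, reason };
}
```

demo() resolves to { fulfilled: "frame-1", reason: "camera-denied" }: the same promise object passes through pending into exactly one settled state.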

H6S2: async function:

The async function declaration declares an async function where the await keyword is permitted within the function body. The async and await | keywords enable asynchronous, promise-based behavior to be written in a cleaner | style, avoiding the need to explicitly configure promise chains.

async function foo() {
  await 1;
}

function foo() {
  return Promise.resolve(1).then(() => undefined);
}

Hu: Mozilla considers the preceding 2 code blocks to be equivalent. Based on this experimental outcome: Mozilla: Code after each await expression can be thought of as existing in a .then | callback. In this way a promise chain is progressively constructed with each reentrant step through the function. The return value forms the final link in the chain.
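The equivalence can be checked directly in Node.js; this is Mozilla's pair of functions, renamed here so both can coexist in one file:

```javascript
// Mozilla's two equivalent forms, renamed so both can live in one script.
async function fooAwait() {
  await 1;
}

function fooThen() {
  return Promise.resolve(1).then(() => undefined);
}

// Both return a Promise that fulfills with undefined:
const bothUndefined = Promise.all([fooAwait(), fooThen()]).then(
  ([a, b]) => a === undefined && b === undefined
);
```

Both calls hand back a Promise immediately, and both promises fulfill with undefined, which is the "final link in the chain" Mozilla describes.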

H5S2: navigator:<Mozilla, a-r>: The Navigator interface represents the state and the identity of the user agent. It allows scripts to query it and to register themselves to carry on some activities. Hu: navigator.mediaDevices is supported by all | modern browsers, in secure contexts. H6S1: Navigator.mediaDevices<Read-only> Returns a reference to a MediaDevices | object which can then be used to get | information about available media devices (MediaDevices.enumerateDevices()), find out what constrainable | properties are supported for media on the user’s computer and user agent (MediaDevices.getSupportedConstraints()), and to request access to media using MediaDevices.getUserMedia().

H5S3: getUserMedia:<WP.MIC-H2S107,H3S3.H4S1> returns a Promise that resolves to a MediaStream object. The mediaConst variable passed as an argument # was defined earlier in lines 15-7:

const mediaConst = {
     video: true
};

H5S4: Running-script:

Clicking on a call button in the interface built by STAR-TECH will prompt the user to grant camera permission, due to the code in this section.

Hu: At this point, according to STAR-TECH, the camera is already streaming inside the browser; however, it’s not displayed yet, which will occur, after he creates the connection, in his workflow.

H4S2: 5:03 p-029:

Hu: We have to jump to 5:03 of P-029 to find the next usable | element:

<video id="remoteVideo"
<video id="localVideo"

H5S1: querySelector:

Hu: STAR-TECH uses the JS querySelector built-in to create a JS-variable that<Mozilla>: returns the first | Element within the document that matches the specified | selector, or group of selectors. If no matches are found, null is returned. This is a metacontrol factor, of the HTML.embed-element, that we do not need to adopt, for the core functionality of passing and displaying the video | feed.

H5S2: Video.stream-source | unknown:

Hu: P-029 explicitly covers displaying the video-stream, but logically misses where it came from, or how it was received by the displaying | client.

H5S3:<video> tag in HTML:<Mozilla>: The <video> HTML element embeds a media | player which supports video | playback into the document. You can use <video> for audio content as well, but the <audio> element may provide a more appropriate | user experience. W3<a-r>: There are three supported | video formats in HTML: MP4, WebM, and OGG.

  • autoplay<value: autoplay>: Specifies that the video will start playing as soon as it is ready
  • controls<value: controls>: Specifies that video controls should be displayed (such as a play/pause button etc.)
  • height<value: pixels>: Sets the height of the video player
  • loop<value: loop>: Specifies that the video will start over again, every time it is finished
  • muted<value: muted>: Specifies that the audio output of the video should be muted
  • poster<value: URL>: Specifies an image to be shown while the video is downloading, or until the user hits the play button
  • preload<value: auto/metadata/none>: Specifies if and how the author thinks the video should be loaded when the page loads
  • src<value: URL>: Specifies the URL of the video file
  • width<value: pixels>: Sets the width of the video player

W3: The<video>tag supports global | attributes and event attributes<WP.MIC-H2S55, a-r>.

H5S4: Detecting track addition and removal

<Mozilla>: You can detect when | tracks are added to and removed from a <video> element using the addtrack and removetrack | events. However, these events aren’t sent directly to the <video> element itself. Instead, they’re sent to the track list object within the <video> element’s HTMLMediaElement that corresponds to the type of track that was added to the element.

H5S5: HTMLMediaElement: <Mozilla>: The HTMLMediaElement interface adds to HTMLElement the properties and methods needed to support basic media-related capabilities that are common to audio and video. The HTMLVideoElement and HTMLAudioElement elements both inherit this interface.

via Mozilla.

H5S6: Serving the video | track:

Hu: Traditionally, video files are served to the browser by Apache web server<ref: <video>element, via Mozilla>, and are stored in the server | files or db.

H5S7: Sample code<W3>:

<!DOCTYPE html><html><body>
<h1>The video element</h1>
<video width="320" height="240" controls>
  <source src="movie.mp4" type="video/mp4">
  <source src="movie.ogg" type="video/ogg">
  Your browser does not support the video tag.
</video>
</body></html>

H4S3: 5:18 in P-029 and 5:56 in P-021:

// line-43 in main.js
localVideo.srcObject = mediaStream; 

This line attaches the srcObject property<a-r><Mozilla>: The srcObject property of the HTMLMediaElement | interface sets or returns the object which serves as the source of the media associated with the HTMLMediaElement. Hu: Due to the action of querySelector in H4S2-H5S1, var localVideo is connected to HTML.element-ID = ‘localVideo’; this line establishes JS var-mediaStream, which was defined in H4S1, as the source of the media.

H5S1: srcObject property:<Mozilla>: Value: a MediaStream, MediaSource, Blob, or File object (though see the compatibility table for what is actually supported). Basic example: In this example, a MediaStream from a camera is assigned to a newly-created <video> | element.

const mediaStream = await navigator.mediaDevices.getUserMedia({video: true});
const video = document.createElement('video');
video.srcObject = mediaStream;

H3S2: The build:

H4S1: Client-send.php:

const mediaStream = await navigator.mediaDevices.getUserMedia({video: true});
const video = document.createElement('video');
video.srcObject = mediaStream;

Snippet # via<Mozilla, a-r>: Hu:

H4S2: Server.write.php:

H4S3: Server-read.php:

H4S4: Client-display.php:

H3S3: Calculating the time.size-per,frame:

Hu: at 30 fps, a frame is 33-ms; setting this as the payload size, our delay will be 33-ms, if the instant of finished recording, is the instant that the recipient can start viewing. The cli-ser-TURN-ser-cli travel path, we calculated in H2S124, covers 1-mn meters, which the speed of light<Turing><#n-p> can cover in 1/300s, or ~3-ms, trivial, compared to the payload size, for a total latency of 36-ms, an industry | standard. At 3 frames per payload, we would still be hovering at 100-ms, which I established is the threshold for ultra.low-latency.
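The arithmetic above, restated as a quick Node.js check; the 1-mn-meter path and the 30-fps payload are this section's and H2S124's assumptions, not measured values:

```javascript
const FPS = 30;                       // one frame per payload = 1000/30 ≈ 33 ms
const PATH_METERS = 1_000_000;        // assumed cli-ser-TURN-ser-cli path (H2S124)
const LIGHT_METERS_PER_MS = 300_000;  // speed of light ≈ 3e8 m/s = 3e5 m per ms

function latencyMs(framesPerPayload) {
  const payloadMs = framesPerPayload * (1000 / FPS);  // recording time buffered
  const travelMs = PATH_METERS / LIGHT_METERS_PER_MS; // ≈ 3.3 ms propagation
  return payloadMs + travelMs;
}
```

latencyMs(1) ≈ 36.7-ms, matching the ~36-ms figure above; latencyMs(3) ≈ 103-ms, hovering at the 100-ms ultra.low-latency threshold.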

H3S4: The Tests:

Hu: Since I’m still in the process of scoping the feasibility of this implementation, let’s get to the critical | tests early, because these will make or break.

H4S1: Browser web-cam access and self-view test:

Hu: I will set up a basic test to seek JS-perms from browser to access web-cam, using H5S1: navigator.mediaDevices.getUserMedia(constraints), and H5S2: the<video>tag in HTML, with H5S3: srcObject<r: H3S1, H3S5>.


<button type="button" onclick="cam_on2()">Cam-on</button>
<script>
function cam_on() {
	let constraintObj = { 
		audio: false, 
		video: true 
	};
	// ...
}
function test_write() {
	// ...
}
function cam_on2() {
	let constraits = {
		video: true
	};
	navigator.mediaDevices.getUserMedia(constraits)
	.then((stream) => {
		/* use the stream */
	})
	.catch((err) => {
		/* handle the error */
	});
}
</script>

<!-- http://flare/testing-progress/MediaDevices/cam-display.php -->

H5S1: The above code spits an error to the console<Inspect->console, after running page>: according to the shoddy | English at<rollbar, a-r>, this can be taken to mean that the variable passed into getUserMedia, in my MVP.dev-1, constraints, is improperly | defined. Revisiting docs:

H5S2: Getting a better hang of JS.error-handling<anthro!>, there’s an added layer PHP doesn’t get: clicking on the blue text in the previous screen and tabbing over to<Inspect->Sources> iso-spots the error, in the red-x:

H5S3: <Mozilla, MediaDevices.getUserMedia() a-r>: This feature is available only in secure contexts (HTTPS). Hu: As an involuntary.z-zug<sued>, I need to figure out how to run HTTPS on WAMP<a.r-interlude><fbno><WP.MIC-H2S129>

via<Laravel.Package-Tutorial a-r>

H5S4: After completing H2S129 and trying the HTTPS-protocol #here:

<button type="button" onclick="cam_on()">Cam-on</button>
<script>
function cam_on() {
	navigator.mediaDevices.getUserMedia({
		audio: true,
		video: { width: 1280, height: 720 }
	});
}
</script>

Hu: With this simplified | code, I was able to # grant | permission to microphone and camera, with no display yet; also, my web-cam gives feedback: it’s not yet on.

H6S1: brave://settings/content#media-stream-mic: At this URL, and in Chrome, you can reset permissions for flare to have access to MIC-cam, and change them.

H6S2: Remember my decision: if set to “forever”, or a time-frame, you can avoid seeing the permission pop-up, every time you click Cam-on, after returning to site.

H6S3: This small video icon in the URL-bar displays cam-engaged status, which we can use for feedback, after perm-forever<Turing>

H6S4: I was able to get a basic cam-on test published<WP.MIC-H2S131,H3S1.H4S4>

Implications: This test is necessary | regardless of our next path, as it’s a LUCA<Turing><ch.32-H2S4> of all | possible implementations. I’m committed to capturing web-cam from the browser, as I am building the first desktop application that uses the browser as the UI.


Hu: Note that this test is not experimental; I have seen quite a few working cases online already, so I’m simply testing to determine the correct implementation technique, as a form of self-training.

H4S2: Test a 1/30s media-stop function run on a browser API recording:

Hu: This test will tell us what happens to an mp4 file, when it’s cropped at the rate of 30 crops per second. We will simply examine these files, on our local computer, and possibly, chain them end-to-end, using a video editor like Sony Vegas to scope the watchability of these samples<Turing>.

Code: //

Implications: This will be first in a series of tests, that will scope the feasibility of our split-MP4 implementation, as a counter to Web-RTC. In order for this to work, the recording and cropping have to be instant, causing no further delay, than what is inherent in post-delivery of the payload; the files must be uncorrupted, despite the short crop.time-line, and there cannot be an overage of missing frames, when linked together, causing U.X-disruptions. None of this will predict what will happen, after the files are delivered, including remotely, however.

H4S3: // cont:

Since H4S2: is a critical | test of mp4 as a platform, we cannot yet make a determination of H4S3, although its failure does not rule out the possibility of H4S3 as a continuation. We decide it’s best, mental.capacity-wise<WP.MIC-H2S19>, to reserve flexibility in this space for now^2.

H4S4: Error-catching:

.catch(e => { console.error('getUserMedia() failed: ' + e); });

<miguelao, a-r>: An interesting catch implementation that involves no confusing brackets, and stacks on well with a single .then line above; this modular error-catch is intuitive, and is one I’m willing to consider implementing<Github, a-r><WP.MIC-H2S131,H3S1.H4S4-H5S1,H6S4>

H4S5: Generation of an objective remote.blob-url & remote-playback:

Next test: I need to determine whether the generated blob URL can be read remotely, at any speed. Start by trying on the local file, to confirm 100% that I have the correct URL setup, and that the blob can be displayed.<WP-Buffalo><11/29>: FANTASTIC news: I have successfully reproduced remote replay of blob, and this represents the max-red replication of WebRTC:

H5S1: cam.record-URL,create.php:

<p><video autoplay="true" id="videoElement"></video>
<video autoplay="true" id="replayElement"></video></p>
<button type="button" id="start" onclick="cam_on()">Cam-on</button>
<button type="button" onclick="console_log()">Console-log</button>
<button type="button" onclick="start_record()">Start-record</button>
<button type="button" onclick="stop_record()">Stop,replay</button>
<script>
var video = document.querySelector("#videoElement");
var replay = document.querySelector("#replayElement");
var stream;
var mediaRecorder;
function cam_on() {
	stream = navigator.mediaDevices.getUserMedia({ video: true });
	stream.then(function (value) { video.srcObject = value; });
}
function start_record() {
	stream.then(function (value) {
		mediaRecorder = new MediaRecorder(value);
		// The next line will already create a blob, so we need no additional handling.
		mediaRecorder.start();
	});
}
function stop_record() {
	// We need to stop the recording, and use ondataavailable to push replayElement.srcObject<Turing>
	mediaRecorder.ondataavailable = (e) => {
		replay.src = window.URL.createObjectURL(e.data);
		// replay.srcObject = e.data;
		// console.log(e.type);
		console.log(replay.src);
	};
	mediaRecorder.stop();
}
function console_log() {
	console.log(typeof mediaRecorder);
	console.log(typeof stream);
}
</script>

Building off start_record<WP.MIC-H2S133,CM.H4S2>, which created a mediaRecorder-obj, and changed the mediaRecorder.state to ‘recording’, this function stops the recording, the action of which # fires the ondataavailable.event. The data from the blob, containing the media recorded, which was created with .start, and wrapped up with .stop/event-firing, can be captured with the .data property of the dataavailable | event representer, which can be called ‘e’. Name not mine. == The captured blob is in the statement (e.data); next, I create a URL wrapper for this obj using createObjectURL, and assign this URL to replay.src, an HTML<video>element previously defined. This permits HTML to render this blob as an mp4 video and stream it as the media source<WP.MIC-H2S131>.
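To reason about this stop-fires-dataavailable sequence outside the browser, here is a mock recorder; MockRecorder is not the real MediaRecorder, only a sketch of the event ordering described above:

```javascript
// MockRecorder imitates only the ordering: stop() flips state to "inactive",
// then fires ondataavailable with whatever was captured while recording.
class MockRecorder {
  constructor() {
    this.state = "inactive";
    this.ondataavailable = null;
    this._chunks = [];
  }
  start() { this.state = "recording"; }
  feed(chunk) { if (this.state === "recording") this._chunks.push(chunk); }
  stop() {
    this.state = "inactive"; // state change first, then the event fires
    if (this.ondataavailable) this.ondataavailable({ data: this._chunks.join("") });
  }
}

const rec = new MockRecorder();
let captured = null;
rec.ondataavailable = (e) => { captured = e.data; }; // stand-in for createObjectURL(e.data)
rec.start();
rec.feed("frame-bytes");
rec.stop();
// captured is now "frame-bytes", and rec.state is back to "inactive"
```

The handler is assigned before .stop() is called, mirroring stop_record above: the event representer ‘e’ delivers the captured data exactly once, at settle time.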

H5S2: record.play-remote.php: The URL, created by createObjectURL, for the blob, that contains the media source, can also be pulled remotely, into an HTML tag, on another script:

<p><video autoplay="true" id="replayElement"></video></p>
<button type="button" id="start" onclick="play_remote()">Play-remote</button>
<script>
var replay = document.querySelector("#replayElement");
function play_remote() {
	replay.src = 'blob:https://flare/dcb5ce61-9dd7-42b3-95c5-e142029374e5';
	// console.log(e.type);
}
</script>

Hu: The blob-url is hard-coded here, based on the console.log in H5S1, for testing of the possibility<Turing> # H6S1: From the test-log #: The blob appears to persist<80%> until the end of the other script’s session, indicating that JS has some session-tracking built-in, but not transparently. This also raises questions, if the total recording is to be saved. H6S2: This was tested, which validates that I now, in this blob URL:


as a string, have the representation of a payload, that can be ported, including across the private or public net, to another device. == It will be straightforward from here # to add an AJAX.PHP-setup that can write this URL to a MySQL-tb, by POST’ing the URL to a server-script that can UPDATE, and reading, with setInterval, on the other side, which has its own complementary server-AJAX setup, just as I already set up, with the text.only-exp:<flare, a-r>

H6S2: Further testing directions:

H7S1: Currently working through the logic of the MySQL action, to get a latency delivery of <100-ms here at <H3S7>

H7S2: But what is more pressing, is building an AWS-TURN that can validate the web.transport-capability:<WP.MIC-H2S122>.

H4S6: MVP-testing:

<WP.MIC-H2S62>Testing segmentation: local one instant, local speed, long one instant, long speed.

  • Build the setInterval feed into // Write to the MySQL db in a new test column with the index implementation to start, go big as bit, new table.
  • Next: figure out the remote URL access to blob as the first step for long, and manual feed to start. Then, write the remote db-conn, and push to that H2S, mainly, with ref, and code pull.
  • I will not be using XMLHTTPRequest to send the blob; I will try to access the blob directly using the URL.
  • Problem: the receiving script will need to setInterval across internet? No: let’s make the db-UPDATE go across the Internet with a remote form blob URL, but the SELECT still checks a local db<1/(1*10^40)>
  • Main challenges: remote blob URL, remote db-conn and update, setInterval speed effect on video playback, and, as a consequence, possibly ordering, but less so, if speed can be maintained.
  • Publish: 60s iPhone video of my screen on the stand demonstrating various cases, dual window, ultra low latency point out, all code, pointers, and a small code showcase in video.

URL format:


Therefore, the remote URL needs to be in the form:


Note: I can’t test remote until I create port forwarding for 443 in both the router and the Windows inbound firewall, I believe, and these may be among other changes needed.

H5S1: Automatic set.interval-playback in remote file, 1 clip, local-server:

Hu: At minimum, this implementation<Ukraine!> requires a setInterval function that reads a MySQL-grid for the latest | blob-URL, whereas previously, in<H4S5-H5S2>, the URL-feed into replay.src was hardcoded. The test in H5S1 will be only for record.play-remote,auto.php and blob.URL-SELECT,server, newly generated files for this test, continuing<fbno>. Both the MySQL SELECT and the blob-URL, in this test, will be local.

H6S1: record.play-remote,auto.php:

H6S2: blob.URL-SELECT,server:

H6S3: Manual | clock test: I will test the latency for playback, in the single video, between the time of stop-record and the start of playback, and that same interval, for a fast-ended clip # Note that the UPDATE of the blob-URL is not automatic, at this phase.

H3S5: Capturing and Saving User Audio or Video with JavaScript: Steve Griffith<a-r>:

H4S1: 9:43:

Line 84: let mediaRecorder = new MediaRecorder(mediaStreamObj)

Where mediaStreamObj is the argument passed into the function in line 65, which includes the variable declaration.

Griffith: This is the media stream recording API, so it’s a second API we create a media recorder object and we pass that stream in. We still have to do this: getUserMedia, because this is the thing that gets the permissions and it gets the stream; we are feeding that video stream into this recorder object, or we’re connecting the two of them. We haven’t told it to start recording yet, we’ve just said: okay, this is the stream that you’re going to be listening to, I’m gonna create an array. This is where I’m gonna put the data, so as I’m recording, or at once I tell it to start recording, it’s gonna be feeding data into this array. When it’s done, we’re gonna take the contents of this array, turn it into a blob, and then put it into that second video tag.

H5S1: Line 88:

mediaRecorder.start()

Hu: This line initiates the media recording:<Mozilla>: The MediaRecorder method start(), which is part of the MediaStream | Recording API, begins recording media into one or more Blob objects.

H6S1: Blob | objects:

<Mozilla>: You can record the entire | duration of the media into a single | Blob (or until you call requestData()), or you can specify the number of milliseconds to record at a time. Then, each time that amount of media has been recorded, an event will be delivered to let you act upon the recorded media, while a new Blob is created to record the next slice of the media.<Mozilla-2>: The Blob | object represents a blob, which is a file-like | object of immutable, raw | data; they can be read as text or binary | data, or converted into a ReadableStream so its methods can be used for processing the data.


<Mozilla>: Returns a newly | created Blob object which contains a concatenation of all of the data in the array passed into the constructor. Immutable<adj><Ox-lang>: unchanging over time or unable to be changed.
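That constructor behavior can be confirmed in Node.js (18+, where Blob is a global, as in the browser); the part strings here are arbitrary stand-ins for recorded slices:

```javascript
// Node 18+ exposes the same Blob global the browser has. The constructor
// concatenates every element of the array passed in.
const parts = ["slice-1|", "slice-2|", "slice-3"];
const blob = new Blob(parts);
// For ASCII strings, size is the total character count: 8 + 8 + 7 = 23 bytes.
// The blob itself is immutable; slice() returns a NEW Blob rather than
// modifying this one.
const head = blob.slice(0, 8);
```

This matches the Mozilla note above: the data, once wrapped, is raw and unchanging; all further handling (slicing, streaming, URL-wrapping) derives new objects from it.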

H7S1: requestData():<Mozilla>: the MediaRecorder.requestData() method (part of the MediaRecorder API) is used to raise a dataavailable event containing a Blob object of the captured | media as it was when the method was called. This can then be grabbed and manipulated as you wish. Example:

captureMedia.onclick = () => {
  mediaRecorder.requestData();
};

Explainer: // makes snapshot available of data so far // ondataavailable fires, then capturing continues in new | Blob.

H7S2: ReadableStream:

The ReadableStream interface of the Streams API represents a readable | stream of byte data. The Fetch API offers a concrete | instance of a ReadableStream through the body property of a Response object. ReadableStream is a transferable object.

H5S2: Line 92:

mediaRecorder.stop()

<Mozilla>: The MediaRecorder.stop() method (part of the MediaRecorder API) is used to stop media capture. When the stop() method is invoked, the UA queues a task that runs the following steps:

  1. If MediaRecorder.state is “inactive”, raise a DOM InvalidState error and terminate these steps. If the MediaRecorder.state is not “inactive”, continue on to the next step.
  2. Set the MediaRecorder.state to “inactive” and stop capturing media.
  3. Raise a dataavailable event containing the Blob of data that has been gathered.
  4. Raise a stop event.

H6S1: MediaStream Recording API<a-r>:

<Mozilla>: The MediaStream Recording API, sometimes referred to as the Media Recording API or the MediaRecorder API, is closely | affiliated with the Media Capture and Streams API and the WebRTC API. The MediaStream Recording API makes it possible to capture the data generated by a MediaStream<WP.MIC-H2S107,H3S1> or HTMLMediaElement object for analysis, processing, or saving to disk. It’s also surprisingly | easy to work with.

The MediaStream Recording API is comprised of a single major interface, MediaRecorder, which does all the work of taking the data from a MediaStream and delivering it to you for processing. The data is delivered by a series of dataavailable events, already in the format you specify when | creating the MediaRecorder. You can then process the data further or write it to file as desired. The process of recording a stream is simple:

  1. Set up a MediaStream or HTMLMediaElement (in the form of an <audio> or <video> element) to serve as the source of the media data.
  2. Create a MediaRecorder object, specifying the source stream and any desired options (such as the container’s MIME type or the desired bit rates of its tracks).
  3. Set ondataavailable to an event handler for the dataavailable event; this will be called whenever data is available for you.
  4. Once the source media is playing and you’ve reached the point where you’re ready to record video, call MediaRecorder.start() to begin recording.
  5. Your dataavailable event handler gets called every time there’s data ready for you to do with as you will; the event has a data attribute whose value is a Blob that contains the media data. You can force a dataavailable event to occur, thereby delivering the latest sound to you so you can filter it, save it, or whatever.
  6. Recording stops automatically when the source media stops playing.
  7. You can stop recording at any time by calling MediaRecorder.stop().

H3S6: How to Record Video and Audio From Camera Using MediaRecorder WebRTC API in Javascript Full Project <Coding-Shiksha>: Live.code-read:

H4S1: Interface overview<fbno>:

Hu: My confidence in this implementation has increased from 40%, after watching Griffith, to 70%, at this point. At 90%, I’ll start building a demo.

H3S7: MySQL-PHP short.polling-logic<Turing> for serving blob-URL into video.src<Turing-dirty>

18. Qxf6+!! Kxf6 19. Nd5+<royal-fork> https://lichess.org/JvbDIvJ6/white#34 or Nf5+!! Kh8 19. Qh4! Nh5<forced> ==

The timeslice<33-ms> argument to MediaRecorder.start() sets the timing for pushes, but it’s the firing of dataavailable that will trigger the chain of events, from sender side, which will 1) save a blob 2) createObjectURL and 3) write the URL from 2) into MySQL. This concludes sender side actions.

The recipient side will be polling this db, during the call, at 30/s to grab the least recent unread URL<Turing>. The poll, if there is a read, will SELECT this URL by its index from the table, shared between the 2 users, and DELETE the URL, immediately after. Details later. 

Once the blob URL is retrieved, it can be fed to JS #, which will associate the string of that URL with a src of<video>. I can start with my XMLHttp AJAX setup with setInterval(33), and that PHP file # SELECTs and DELETEs from tb, returning the blob.url-str.

Indexing logic: the sender side needs to +1 to the latest index, and the recipient side needs to pull the highest # there’s a small chance the db may be emptied, so the sender side, which also will be using AJAX to send the URL to MySQL<cli-ser-ser-cli>. Not sure exactly what the effect will be on the recipient side, as a result of the q.tum-tuned timings; the polling logic assures that there will be a mean-milliseconds probabilistic delay, which should cover most micro-delays and adjustments<Turing> from the sender side. A 1-ms delay, depending on the margin, can also be built in manually, to cover most blackouts<Turing-dirty>

Indexing-cont: the sender side will track a count, recycling my AJAX / HTTP / PHP counter from math-games, but the incrementer will also be triggered by dataavailable, rather than a setInterval. This count will be written from the PHP file into the index column of MySQL. The recipient side will pull the entire table each time and select only the lowest number, and it will read 1 row per SELECT, for now<efficient-sufficient><fbno><Turing>
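A plain-JS dry run of this indexing logic, with an in-memory array standing in for the shared MySQL table; senderPush and recipientPoll are hypothetical names, and the real build would go through the AJAX/PHP path rather than in-memory:

```javascript
// The array mocks the shared table; each row is { idx, url }.
const table = [];
let senderIndex = 0;

// Sender side: fired by dataavailable; +1 to the latest index, then "INSERT".
function senderPush(blobUrl) {
  senderIndex += 1;
  table.push({ idx: senderIndex, url: blobUrl });
}

// Recipient side: one poll tick; "SELECT" the lowest index, then "DELETE" it.
function recipientPoll() {
  if (table.length === 0) return null; // nothing unread this tick
  let low = 0;
  for (let i = 1; i < table.length; i++) {
    if (table[i].idx < table[low].idx) low = i;
  }
  return table.splice(low, 1)[0];
}

senderPush("blob:https://flare/aaa");
senderPush("blob:https://flare/bbb");
// recipientPoll() returns idx 1 (aaa), then idx 2 (bbb), then null
```

SELECT-then-DELETE keeps ordering correct even if the 30/s poll briefly falls behind the sender: unread rows simply queue up and drain lowest-index-first.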


021 is the starting point of our concern, mostly, because we will not be using Web-RTC here.













Steve Griffith – Prof3ssorSt3v3 82.7-k subs









AT&T Tech Channel 80.4K subscribers 1976
Coding Shiksha 26.5K subscribers 7/20

^ https://webninjadeveloper.com/javascript/javascript-mediarecorder-webrtc-api-project-to-record-video-and-audio-from-camera-in-browser/

Josh Reiss: Professor of Audio Engineering, Queen Mary University of London








OpenSSL x Localhost x WAMP:


^ Associated text tutorial: https://infyom.com/blog/how-to-enable-localhost-https-ssl-on-wamp-server



OpenSSL general info, playlist: https://www.youtube.com/playlist?list=PLgBMtP0_D_afzNG7Zs2jr8FSoyeU4yqhi by Cyber Hashira, India.

Error catching:

