WebRTC (Web Real-Time Communication) is a technology built into the browser that simplifies developing applications which use audio and video streams. You can build your own Skype-like app right in the browser: the basic idea is that you open a website and connect with another user immediately. The WebRTC API covers camera and microphone capture, video and audio encoding and decoding, transport layers, and session management.
Under the hood, WebRTC leverages a basic peer-to-peer connection between two browsers. Lots of apps today use peer-to-peer capabilities, such as file sharing, text chat, and others.
Three browsers support WebRTC out of the box: Chrome, Firefox, and Opera. You can check browser compatibility at http://caniuse.com/#search=webrtc.
From here on I assume you are using Chrome, Firefox, or Opera; I use Chrome for my WebRTC tests. To get a feel for what WebRTC can do, navigate to https://opentokrtc.com, enter a room name, click “join”, and click “allow”. You should be able to see yourself. Then open the same page in a new tab and enjoy your new friend 🙂
To sum up, WebRTC is a technology worth learning: it brings rich real-time media into your browser.
The first WebRTC app
Let’s start by obtaining a live video and audio stream from the user’s webcam and microphone. We will use the getUserMedia API, also known as the MediaStream API. One requirement for working with media APIs is a server to host the HTML and JS files; opening the files with a double-click will not work.
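Any static server will do. If you happen to have Node.js installed, here is a minimal sketch of a static server you could drop next to the files (this is just an illustration, not part of the app itself):

//server.js - a minimal static file server sketch, assuming Node.js is installed
//(any other static server works just as well)
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    //serve index.html for the root path, otherwise the requested file
    var file = req.url === '/' ? '/index.html' : req.url;
    fs.readFile(path.join(__dirname, file), function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        //a very rough content type guess based on the file extension
        var type = file.indexOf('.js') !== -1 ? 'application/javascript' : 'text/html';
        res.writeHead(200, { 'Content-Type': type });
        res.end(data);
    });
}).listen(8080, function () {
    console.log('Serving on http://localhost:8080');
});

Run it with node server.js and open http://localhost:8080 in the browser.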
Our first WebRTC app will be simple: it will show a video element on the screen, ask the user for camera access, and display the live video stream.
Create a file named index.html:
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
    <video autoplay></video>
    <script src="main.js"></script>
</body>
</html>
Create main.js in the same folder:
//check if the browser supports WebRTC
function hasUserMedia() {
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

//video and audio options of our stream
var constraints = {
    video: {
        mandatory: {
            minWidth: 640,
            minHeight: 480
        }
    },
    audio: true
};

//if the browser supports WebRTC
if (hasUserMedia()) {
    //getting getUserMedia function depending on the browser
    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia;

    //asking user if we can use his webcam and microphone
    navigator.getUserMedia(constraints, function (stream) {
        //stream - our stream from webcam
        var video = document.querySelector('video');
        //inserting our stream into video tag
        video.src = window.URL.createObjectURL(stream);
    }, function (err) {});
} else {
    alert("Sorry, your browser does not support getUserMedia");
}
Refresh your page, click “allow” and you should see your face.
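A quick side note: the prefixed navigator.getUserMedia functions and window.URL.createObjectURL(stream) used above reflect the browsers available when this was written. Newer browsers expose the promise-based navigator.mediaDevices.getUserMedia and let you assign the stream to video.srcObject directly. A rough equivalent sketch, assuming a browser that supports the newer API:

//a rough modern equivalent, assuming navigator.mediaDevices is available
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(function (stream) {
            //newer browsers accept the stream object directly
            document.querySelector('video').srcObject = stream;
        })
        .catch(function (err) {
            console.log("Raised an error when capturing:", err);
        });
}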
The second app
We can configure the stream using the first parameter of the getUserMedia API. For example, to turn off the video stream and keep only audio we can use:
navigator.getUserMedia({ video: false, audio: true }, function (stream) {
    //now the browser asks only for microphone access
});
There are other parameters which can constrain our stream. For example, you might want mobile phone users to capture only 480×320 resolution, while desktop users capture 1024×768 resolution with a 16:9 aspect ratio.
Create index.html:
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
    <video autoplay></video>
    <script src="main.js"></script>
</body>
</html>
Add a main.js file:
//check if the browser supports WebRTC
function hasUserMedia() {
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

//desktop user constraints
var constraints = {
    video: {
        mandatory: {
            minAspectRatio: 1.777,
            maxAspectRatio: 1.778
        },
        optional: [
            { maxWidth: 1024 },
            { maxHeight: 768 }
        ]
    },
    audio: true
};

//if this is a mobile device
if (/Android|iPhone/i.test(navigator.userAgent)) {
    //mobile device constraints
    constraints = {
        video: {
            mandatory: {
                maxWidth: 480,
                maxHeight: 320
            }
        },
        audio: true
    };
}

if (hasUserMedia()) {
    //getting getUserMedia object depending on the browser
    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia;

    //asking user if we can use his webcam and microphone
    navigator.getUserMedia(constraints, function (stream) {
        var video = document.querySelector('video');
        //inserting our stream into video tag
        video.src = window.URL.createObjectURL(stream);
    }, function (err) {});
} else {
    alert("Sorry, your browser does not support getUserMedia");
}
Now the resolution on a mobile phone should be smaller than on a desktop. You can find more constraints at https://tools.ietf.org/html/draft-alvestrand-constraints-resolution-03.
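By the way, the mandatory/optional syntax above is the old Chrome-specific constraint format. Browsers implementing the newer spec express the same intent with plain constraint objects; here is a rough sketch, assuming navigator.mediaDevices.getUserMedia is available:

//roughly the same desktop constraints in the standardized syntax
//(a sketch, assuming navigator.mediaDevices.getUserMedia is supported)
var modernConstraints = {
    video: {
        width: { max: 1024 },
        height: { max: 768 },
        aspectRatio: { ideal: 16 / 9 }
    },
    audio: true
};

navigator.mediaDevices.getUserMedia(modernConstraints)
    .then(function (stream) {
        document.querySelector('video').srcObject = stream;
    })
    .catch(function (err) {
        console.log("Raised an error when capturing:", err);
    });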
The third app
Sometimes there is more than one camera or microphone on the user’s device. With the MediaStreamTrack API we can request a list of available devices and select the one we need.
index.html
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
    <video autoplay></video>
    <script src="main.js"></script>
</body>
</html>
main.js
//getting info about available audio and video devices
MediaStreamTrack.getSources(function (sources) {
    var audioSource = null;
    var videoSource = null;

    for (var i = 0; i < sources.length; ++i) {
        var source = sources[i];
        if (source.kind === "audio") {
            console.log("Microphone found:", source.label, source.id);
            audioSource = source.id;
        } else if (source.kind === "video") {
            console.log("Camera found:", source.label, source.id);
            videoSource = source.id;
        } else {
            console.log("Unknown source found:", source);
        }
    }

    var constraints = {
        audio: {
            optional: [{ sourceId: audioSource }]
        },
        video: {
            optional: [{ sourceId: videoSource }]
        }
    };

    //asking user for webcam and microphone access
    navigator.webkitGetUserMedia(constraints, function (stream) {
        var video = document.querySelector("video");
        video.src = window.URL.createObjectURL(stream);
    }, function (err) {
        console.log("Raised an error when capturing:", err);
    });
});
Open the page and see the console output.
Demo (Chrome only).
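A note for the future: MediaStreamTrack.getSources was a Chrome-only API and has since been removed; the standardized replacement is navigator.mediaDevices.enumerateDevices. A rough sketch of the same device listing with the newer API (assuming a browser that supports it):

//listing available devices with the standardized API
//(a sketch, assuming navigator.mediaDevices is available)
navigator.mediaDevices.enumerateDevices()
    .then(function (devices) {
        devices.forEach(function (device) {
            if (device.kind === "audioinput") {
                console.log("Microphone found:", device.label, device.deviceId);
            } else if (device.kind === "videoinput") {
                console.log("Camera found:", device.label, device.deviceId);
            } else {
                console.log("Other device found:", device.kind, device.label);
            }
        });
    })
    .catch(function (err) {
        console.log("Could not enumerate devices:", err);
    });

The deviceId values it returns can then be passed to getUserMedia through a deviceId constraint to pick a specific camera or microphone.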
The fourth app
Let’s create an app which captures a video frame, applies different effects to the picture, adds some text to it, and draws it on the web page. We will use the Canvas API.
index.html
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
    <style>
        video, canvas {
            width: 480px;
            height: 320px;
        }
        .sepia {
            -webkit-filter: sepia(1);
            -moz-filter: sepia(1);
            -ms-filter: sepia(1);
            -o-filter: sepia(1);
            filter: sepia(1);
        }
        .invert {
            -webkit-filter: invert(1);
            -moz-filter: invert(1);
            -ms-filter: invert(1);
            -o-filter: invert(1);
            filter: invert(1);
        }
    </style>
</head>
<body>
    <video autoplay></video>
    <canvas></canvas>
    <button id="takePhoto">take photo</button>
    <script src="main.js"></script>
</body>
</html>
main.js
//check if the browser supports WebRTC
function hasUserMedia() {
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (hasUserMedia()) {
    //getting getUserMedia object depending on the browser
    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia || navigator.msGetUserMedia;

    var video = document.querySelector('video'),
        canvas = document.querySelector('canvas'),
        streaming = false;

    //asking user for webcam and microphone access
    navigator.getUserMedia({ video: true, audio: true }, function (stream) {
        video.src = window.URL.createObjectURL(stream);
        streaming = true;
    }, function (error) {
        console.log("Raised an error when capturing:", error);
    });

    //when clicking on the "take photo" button
    document.querySelector("#takePhoto").addEventListener('click', function (event) {
        if (streaming) {
            canvas.width = video.clientWidth;
            canvas.height = video.clientHeight;

            var context = canvas.getContext('2d');
            //insert video frame into canvas
            context.drawImage(video, 0, 0);
        }
    });

    //filter support
    var filters = ['', 'sepia', 'invert'],
        currentFilter = 0;

    //when clicking on the video
    document.querySelector('video').addEventListener('click', function (event) {
        if (streaming) {
            canvas.width = video.clientWidth;
            canvas.height = video.clientHeight;

            var context = canvas.getContext('2d');
            context.drawImage(video, 0, 0);

            //apply css filters
            currentFilter++;
            if (currentFilter > filters.length - 1) currentFilter = 0;
            canvas.className = filters[currentFilter];

            //write text on the canvas
            context.fillStyle = "white";
            context.fillText("This is text!", 10, 10);
        }
    });
} else {
    alert("Sorry, your browser does not support getUserMedia");
}
Now if you click the “take photo” button you will capture a video frame into the canvas. If you click on the video itself you will cycle through different photo effects applied to the picture.
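If you also want to save the captured frame, the canvas content can be exported as an image with canvas.toDataURL. A small sketch (the temporary link element is my own addition, not part of the page above):

//export the current canvas content as a PNG and trigger a download
//(a sketch; the temporary link element is a hypothetical addition)
var canvasElement = document.querySelector('canvas');
var dataUrl = canvasElement.toDataURL('image/png');

var link = document.createElement('a');
link.href = dataUrl;
link.download = 'photo.png'; //suggested file name for the download
document.body.appendChild(link);
link.click(); //trigger the download
document.body.removeChild(link);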
That’s all for today 🙂