File Web Share Target
I’ve frequently said that for web apps to compete effectively in the world of apps, they need to be integrated into all of the places that users expect apps to be. Inter-app communication is one of the major missing pieces of the web platform, and one of the last major missing features is native-level sharing: web apps need to be able to get data out of their silo and into other web sites and apps; they also need to be able to receive data from other native apps and sites.
Testing file share target from camera
This is testing sharing directly from the camera app. It looks like it worked :)
Testing file share target
This is a test of the Share Target API on Android and its ability to share files. If you see something here, then all is good :)
Ricky Mondello: Adoption of Well-Known URL for Changing Passwords — ⭐
Ricky Mondello over on the Safari team recently shared a note about how Twitter is using the /.well-known/change-password spec.
I just noticed that Twitter has adopted the Well-Known URL for Changing Passwords! Is anyone aware of other sites that have adopted it?
Twitter’s implementation: https://twitter.com/.well-known/change-password; GitHub’s: https://github.com/.well-known/change-password; Specification: https://github.com/WICG/change-password-url
The feature completely passed me by, but it is a neat idea: given a file in a well-known location, the browser can offer a UI to the user that allows them to quickly reset their password without having to navigate the site’s complex UI.
The spec is deceptively simple: the well-known resource simply redirects the user to the page where they can perform the action. This led me to thinking: can we offer more of these features?
- A well-known location for GDPR-based consent models (cookie consent) - site owners could offer a link to the page where a user can manage and potentially revoke all cookies and other data-consent items.
- A well-known location for browser permission management - site owners could offer a quick place for users to revoke permissions to things like geolocation, notifications and other primitives.
- A well-known path for account deletion and changes
- A well-known path for mailing-list subscription management
The list goes on…. I really like the idea of simple redirect files that help users discover common actions, and give the browser a way to surface them.
Update: I filed an issue against Chrome to see if we can get a similar implementation.
pinch-zoom-element — ⭐
Jake and the team built this rather awesome custom element for managing pinch-zooming on any HTML, outside of the browser’s own pinch-zoom dynamics (think mobile viewport zooming). The element was one of the central components that we needed for the Squoosh app that we built and released at Chrome Dev Summit (… I say ‘released at Chrome Dev Summit’ - Jake was showing it to everyone at the China Google Developer Day even though the rest of the team were under embargo ;) … )
install:
npm install --save-dev pinch-zoom-element
<pinch-zoom>
  <h1>Hello!</h1>
</pinch-zoom>
I just added it to my blog (took just a couple of minutes); you can check it out on my ‘life’ section where I share photos that I have taken. If you are on a touch-enabled device you can quickly pinch-zoom on the element, and if you are using a trackpad that can handle multiple finger inputs, that works too.
This element is a great example of why I love web components as a model for creating user-interface components. The pinch-zoom element is just under 3KB on the wire (uncompressed), has minimal dependencies for building, and it just does one job exceptionally well, without tying in any custom application-level logic that would make it hard to use (I have some thoughts on UI-logic vs app-logic components that I will share based on my learnings from the Squoosh app).
I would love to see elements like these get more awareness and usage, for example I could imagine that this element could replace or standardise the image zoom functionality that you see on many commerce sites and forever take away that pain from developers.
Registering as a Share Target with the Web Share Target API — ⭐
Pete LePage introduces the Web Share Target API and its availability in Chrome via an origin trial.
Until now, only native apps could register as a share target. The Web Share Target API allows installed web apps to register with the underlying OS as a share target to receive shared content from either the Web Share API or system events, like the OS-level share button.
This API is a game changer on the web; it opens the web up to something that was once only available to native apps: native sharing. Apps are silos: they suck in all data and make it hard to access across platforms. Share Target starts to level the playing field so that the web can play in the same game.
The Twitter mobile experience already has Share Target enabled. This post was created using the Share Target I have defined in my site’s ‘admin panel’ manifest.json - it works pretty well, and the minute they land file support I will be able to post any image or blob on my device to my blog.
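For reference, the `share_target` entry in a manifest.json looks roughly like this under the origin trial. Treat the exact shape as a sketch — the API was still changing, and the `/share` action path is whatever endpoint your own app handles:

```json
{
  "share_target": {
    "action": "/share",
    "method": "GET",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}
```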
Very exciting times.
Read the linked post to learn more about the timelines for when this API should go live and how to use the API.
Why Build Progressive Web Apps: Push, but Don't be Pushy! Video Write-Up — ⭐
A great article and video and sample by Thomas Steiner on good push notifications on the web.
A particularly bad practice is to pop up the permission dialog on page load, without any context at all. Several high traffic sites have been caught doing this. To subscribe people to push notifications, you use the PushManager interface. Now to be fair, this does not allow the developer to specify the context or the to-be-expected frequency of notifications. So where does this leave us?
Web Push is an amazingly powerful API, but it’s easy to abuse and annoy your users. The bad thing for your site is that if a user blocks notifications because you prompt without warning, then you don’t get the chance to ask again.
Treat your users with respect, Context is king for Web Push notifications.
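The fix is structural: only ask for permission from a user gesture that explains what the notifications are for. A minimal sketch of that pattern — the button, service worker registration, and key parameter names here are assumptions, not code from the article:

```javascript
// Request push permission only from an explicit user gesture, so the
// prompt always has context. `notifyBtn` and `swRegistration` are
// assumed to exist in the page.
function wirePushOptIn(notifyBtn, swRegistration, applicationServerKey) {
  notifyBtn.addEventListener('click', async () => {
    const permission = await Notification.requestPermission();
    if (permission !== 'granted') return; // respect the user's choice
    await swRegistration.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey // your VAPID public key
    });
  });
}
```

Because the prompt follows a click on something like a “Notify me” button, a denial is an informed choice rather than a reflex, and you haven’t burned your one chance to ask.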
Maybe Our Documentation "Best Practices" Aren't Really Best Practices — ⭐
Kayce Basques, an awesome tech writer on our team, wrote up a pretty amazing article about his experiences measuring how well existing documentation best practices work for explaining technical material. Best practices in this sense can be well-known industry standards for technical writing, or your own company’s writing style guide. Check it out!
Recently I discovered that a supposed documentation “best practice” may not actually stand up to scrutiny when measured in the wild. I’m now on a mission to get a “was this page helpful?” feedback widget on every documentation page on the web. It’s not the end-all be-all solution, but it’s a start towards a more rigorous understanding of what actually makes our docs more helpful.
Whilst I am not a tech writer, my role involves a huge amount of engagement with our tech writing team, as well as publishing a lot of ‘best practices’ for developers myself. I was amazed by how much depth and research Kayce has done on the art of writing modern docs through the lens of our team’s content. I fully encourage you to read Kayce’s article in depth - I learnt a lot. Thank you Kayce!
Feature Policy & the Well-Lit Path for Web Development (Chrome Dev Summit 2018) — ⭐
Jason gave an amazing talk about a new but little-known area of the web platform: ‘Feature Policy’.
Feature Policy is a new primitive which allows developers to selectively enable, disable, and modify the behaviour of certain APIs and features in the browser. It’s like CSP, but for features & APIs! Teams can use new tools like Feature Policy and the Reporting API to catch errors before they grow out of control, ensure site performance stays high, keep code quality healthy, and help avoid the web’s biggest footguns.
Check out featurepolicy.rocks for more information about Feature Policy, code samples, and live demos.
Submit new ideas for policies or feedback on existing policies at → https://bit.ly/2B3gDEU.
To learn more about the Reporting API see https://bit.ly/rep-api.
Feature Policy is an interesting area, and it can be hard to work out where it fits into your workflow.
There are a couple of areas where I see it being beneficial:
- Control third-party content. As an embedder, you should be able to control what functionality runs in the context of your page and when it runs. Feature Policy gives you that control. Don’t want iframes to autoplay video? Turn it off. Don’t want third-party iframes to request geolocation? Turn it off. Don’t want iframes to access sensor information? Turn it off. You should be in control of your experience, not third-party sites.
- Stay on target in development. We talked a lot at Chrome Dev Summit about perf budgets, yet today they can be hard to reason about. Feature Policy enabled on your development and staging servers will help you know if any set of changes you are making will breach your performance budgets, by stopping you from doing the wrong thing. A case in point: our very own Chrome Dev Summit site had the ‘max-downscaling-image’ policy enabled - it inverts the colour of an image when it has been downscaled too much (a large image displayed in a small container). Feature Policy picked it up and enabled us to make a decision about what to do. In the end, we disabled the policy because we were using the larger version of the image in multiple places and the images were already cached at that point.
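In practice a policy is delivered as an HTTP response header (or via the `allow` attribute on an iframe). A sketch of the header form — note that policy names and syntax were still in flux at the time, and ‘max-downscaling-image’ in particular was experimental:

```
Feature-Policy: autoplay 'none'; geolocation 'self'; max-downscaling-image 'none'
```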
I do encourage you all to look into Feature Policy a lot more, because it will play an important part in the future of the web. If you want to see the latest policies that Chrome is implementing, then check out Feature Policy on Chrome Status.
Photos from Chrome Dev Summit 2018 — ⭐
Some awesome photos from this year’s Chrome Dev Summit.
I love this event ;)
I am like Wally - if you can find me, you get a sticker.
Chrome Dev Summit 2018 — ⭐
I am so excited! Tomorrow is the 6th Chrome Dev Summit and it’s all coming together.
Join us at the 6th Chrome Dev Summit to engage with Chrome engineers and leading web developers for a two-day exploration of modern web experiences.
We’ll be diving deep into what it means to build a fast, high quality web experience using modern web technologies and best practices, as well as looking at the new and exciting capabilities coming to the web platform.
I’m currently in the rehearsals and the tech check on the day before the event and it’s looking pretty good :) the talks are done, the MC’s are MC’ing, and the jokes are terrible :)
We’ve split the event into two distinct groups: Web of Today (Day 1); and thoughts on the Web of Tomorrow (Day 2).
The thinking behind this was that we want web developers to come away with a clear understanding of what we (Chrome and Google) think is a good snapshot of modern web development, the most important areas we think businesses and developers should focus on for their users - speed, UI, and capability - and, most importantly, how to meet those objectives based on our learnings from working with a lot of developers over the last year.
The second day’s focus is interesting, and it’s something new that we are trying this year. The intent of the day is to be a lot clearer about what we are starting to work on over the year ahead and where we need your feedback and help to make sure that we are building things that developers need. There should be a lot of deep dives into new technologies that are being designed to fix common problems we all have building modern web experiences, and a lot of opportunity to give our teams feedback so that we can help build a better web with the ecosystem.
I’m really looking forward to seeing everyone over the next two days.
Creating a simple boomerang effect video in javascript
Simple steps to create an Instagram-like video boomerang effect on the web.
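The heart of the effect is just frame sequencing: play the buffered frames forward, then backward. A tiny sketch of that part (frame capture and encoding are left out):

```javascript
// Build a boomerang sequence from buffered frames: forward, then
// backward. In the real app `frames` would be ImageData objects
// grabbed from a canvas, but the sequencing works on any array.
function boomerangSequence(frames) {
  // Drop the final frame from the reversed half so it isn't played twice.
  const reversed = frames.slice(0, -1).reverse();
  return frames.concat(reversed);
}
```

For example, `boomerangSequence([1, 2, 3])` yields `[1, 2, 3, 2, 1]` — the same idea applied to captured video frames gives the back-and-forth loop.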
Grep your git commit log
Finding code that was changed in a commit.
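The two git flags I’d reach for here are -S (the “pickaxe”, which matches commits that changed the number of occurrences of a string) and -G (which matches a regex against the changed lines). A self-contained sketch, using a throwaway repo so the commands have something to find:

```shell
# Demo in a throwaway repo (so the commands are self-contained);
# in real life you'd run the `git log` lines in your own checkout.
repo=$(mktemp -d) && cd "$repo" && git init -q
echo 'function detectUrl() {}' > app.js
git add app.js && git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add detectUrl'

# Commits that added or removed the string "detectUrl" (the "pickaxe")
git log -S 'detectUrl' --oneline

# Commits whose changed lines match a regex
git log -G 'detectUrl\(' --oneline
```

Adding a path at the end (e.g. `git log -S 'detectUrl' -- scripts/`) narrows the search further.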
Performance and Resilience: Stress-Testing Third Parties by CSS Wizardry — ⭐
I was in China a couple of weeks ago for the Google Developer Day, and I was showing everyone my QR code scanner. It was working great until I went offline. When the user was offline (or partially connected) the camera wouldn’t start, which meant that you couldn’t snap QR codes. It took me an age to work out what was happening: it turns out I was mistakenly starting the camera in my onload event, and the Google Analytics request would hang and not resolve in a timely manner. It was this commit that fixed it.
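The pattern that fixes this class of problem is simple: never let a third-party request sit on your critical path. A hedged sketch of the idea (not the actual commit; `startCamera` is a hypothetical stand-in for the app’s real startup work):

```javascript
// Start app-critical work immediately, and load analytics out-of-band
// so a hung third-party request can never block it.
// `startCamera` is a hypothetical helper standing in for the real work.
function init(startCamera) {
  startCamera();

  // Inject the analytics script asynchronously; if the request hangs
  // or fails, nothing above is affected.
  const s = document.createElement('script');
  s.async = true;
  s.src = 'https://www.google-analytics.com/analytics.js';
  document.head.appendChild(s);
}
```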
Because these types of assets block rendering, the browser will not paint anything to the screen until they have been downloaded (and executed/parsed). If the service that provides the file is offline, then that’s a lot of time that the browser has to spend trying to access the file, and during that period the user is left potentially looking at a blank screen. After a certain period has elapsed, the browser will eventually timeout and display the page without the asset(s) in question. How long is that certain period of time?
It’s 1 minute and 20 seconds.
If you have any render-blocking, critical, third party assets hosted on an external domain, you run the risk of showing users a blank page for 1.3 minutes.
Below, you’ll see the DOMContentLoaded and Load events on a site that has a render-blocking script hosted elsewhere. The browser was completely held up for 78 seconds, showing nothing at all until it ended up timing out.
I encourage you to read the post because there is a lot of great insight.
Chrome Bug 897727 - MediaRecorder using Canvas.captureStream() fails for large canvas elements on Android — ⭐
At the weekend I was playing around with a Boomerang effect video encoder, you can kinda get it working in near real-time (I’ll explain later). I got it working on Chrome on Desktop, but it would never work properly on Chrome on Android. See the code here.
It looks like when you use captureStream() on a <canvas> that has a relatively large resolution (1280x720 in my case), the MediaRecorder API won’t be able to encode the video; it won’t error, and you can’t detect ahead of time that it can’t encode the video.
(1) Capture a large-res video (from getUM, 1280x720) to a buffer for later processing.
(2) Create a MediaRecorder with a stream from a canvas element (via captureStream) sized to 1280x720.
(3) For each frame captured, putImageData on the canvas.
(4) For each frame, call canvasTrack.requestFrame() at 60fps.
context.putImageData(frame, 0, 0);
canvasStreamTrack.requestFrame();
Demo: https://boomerang-video-chrome-on-android-bug.glitch.me/
Code: https://glitch.com/edit/#!/boomerang-video-chrome-on-android-bug?path=script.js:21:42
What is the expected result?
For the exact demo, I buffer the frames and then reverse them, so you would see the video play forwards and backwards (it works on desktop). In general I would expect all frames sent to the canvas to be processed by the MediaRecorder API - yet they are not.
What happens instead?
It only captures the stream from the canvas for part of the video and then stops. It’s not predictable where it will stop.
I suspect there is a limit with the MediaRecorder API and what resolution it can encode depending on the device, and there is no way to know about these limits ahead of time.
As far as I can tell this has never worked on Android. If you use https://boomerang-video-chrome-on-android-bug.glitch.me, which has a 640x480 video frame, it records just fine. The demo works just fine at higher resolutions on desktop.
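Until the platform exposes these limits, the only workaround I can think of is a “canary” recording: try encoding a short burst at the target resolution and check whether any data actually arrives. A sketch of that idea — the timings, timeslice, and helper name are all my own assumptions, not a platform capability:

```javascript
// Probe whether MediaRecorder can actually encode a canvas stream at
// a given size, since there is no API to query the limit. Resolves
// true if any encoded data shows up within the timeout.
function canEncodeCanvas(width, height, timeoutMs = 2000) {
  return new Promise((resolve) => {
    const canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    // Keep the stream producing frames by repeatedly drawing.
    const ctx = canvas.getContext('2d');
    const paint = setInterval(() => ctx.fillRect(0, 0, width, height), 100);
    const recorder = new MediaRecorder(canvas.captureStream(30));
    const finish = (ok) => {
      clearInterval(paint);
      if (recorder.state !== 'inactive') recorder.stop();
      resolve(ok);
    };
    const timer = setTimeout(() => finish(false), timeoutMs);
    recorder.ondataavailable = (e) => {
      if (e.data.size > 0) { clearTimeout(timer); finish(true); }
    };
    recorder.start(100); // request an encoded chunk every 100ms
  });
}
```

It’s an ugly workaround — a real capabilities API would be far better — but it at least turns a silent failure into a detectable one.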
If you want to play around with the demo that works on both then click here
Why Microsoft and Google love progressive web apps | Computerworld — ⭐
A nice post about PWAs from Mike Elgan. I am not sure about Microsoft’s goal with PWAs, but I think ours is pretty simple: we want users to have access to content and functionality instantly, and in a way they expect to be able to interact with it on their devices. The web should reach everyone across every connected device, and a user should be able to access it in their preferred modality: as an app if that’s how they expect it (mobile, maybe), or by voice on an assistant, etc.
We’re still a long way from the headless web, however, one thing really struck me in the article:
Another downside is that PWAs are highly isolated. So it’s hard and unlikely for different PWAs to share resources or data directly.
Sites and apps on the web are not supposed to be isolated; the web is linkable, indexable, ephemeral, but we are getting more siloed with each site we build. We are creating unintended silos because the platform doesn’t easily allow users to get their data into and out of sites. I’m not talking about RDF or anything like that; basic operations such as copy and paste, drag and drop, share to site and share from site are broken on the web of today, and that’s before we get to IPC between frames, workers and windows.
Building a video editor on the web. Part 0.1 - Screencast
You should be able to create and edit videos using just the web in the browser. It should be possible to provide a user interface akin to Screenflow that lets you create an output video that combines multiple videos, images, and audio into one video that can be uploaded to services like YouTube. Following on from my previous post that briefly describes the requirements of the video editor, in this post I just wanted to quickly show in a screencast how I built the web cam recorder, and also how to build a screencast recorder :)
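A minimal sketch of a screencast recorder using the standard getDisplayMedia + MediaRecorder pairing — not the exact code from the screencast, and note the API was still in flux at the time (early Chrome exposed it as navigator.getDisplayMedia rather than on mediaDevices):

```javascript
// Capture the screen and record it to a webm Blob. Recording stops
// when the user ends the capture from the browser's own UI.
async function recordScreencast() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    // The track ends when the user clicks "stop sharing".
    stream.getVideoTracks()[0].onended = () => recorder.stop();
    recorder.start();
  });
}
```

The resulting Blob can be fed to `URL.createObjectURL` for playback, or uploaded.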
894556 - Multiple video tracks in a MediaStream are not reflected on the videoTracks object on the video element — ⭐
The first issue I have found trying to build a video editor on the web.
I have multiple video streams (desktop and web cam) and I wanted to be able to toggle between them on one video element so that I can quickly switch between the web cam and the desktop without breaking the MediaRecorder.
It looks like you should be able to do this by toggling the selected property on the videoTracks object of the <video> element, but you can’t: the array of tracks contains only one element (the first video track on the MediaStream).
What steps will reproduce the problem?
(1) Get two MediaStreams with video tracks.
(2) Add them to a new MediaStream and attach it as srcObject on a video element.
(3) Check the videoElement.videoTracks object and see there is only one track.
Demo at https://multiple-tracks-bug.glitch.me/
What is the expected result? I would expect videoElement.videoTracks to have two elements.
What happens instead? It only has the first videoTrack that was added to the MediaStream.
Repro case.
window.onload = () => {
  if ('getDisplayMedia' in navigator) warning.style.display = 'none';

  let blobs;
  let blob;
  let rec;
  let stream;
  let webcamStream;
  let desktopStream;

  captureBtn.onclick = async () => {
    desktopStream = await navigator.getDisplayMedia({ video: true });
    webcamStream = await navigator.mediaDevices.getUserMedia({
      video: { height: 1080, width: 1920 },
      audio: true
    });

    // Always
    let tracks = [...desktopStream.getTracks(), ...webcamStream.getTracks()];
    console.log('Tracks to add to stream', tracks);

    stream = new MediaStream(tracks);
    console.log('Tracks on stream', stream.getTracks());

    videoElement.srcObject = stream;
    console.log('Tracks on video element that has stream', videoElement.videoTracks);
    // I would expect the length to be 2 and not 1
  };
};
Building a video editor on the web. Part 0.
You should be able to create and edit videos using just the web in the browser. It should be possible to provide a user-interface akin to Screenflow that lets you create an output video that combines multiple videos, images, and audio into one video that can be uploaded to services like YouTube. This post is really just a statement of intent. I am going to start the long process of working out what is and isn’t available on the platform and seeing how far we can get today.
Barcode detection in a Web Worker using Comlink — ⭐
I’m a big fan of QR codes; they are a very simple and neat way to exchange data between the real world and the digital world. For a few years now I’ve had a little side project called QRSnapper — well, it’s had a few names, but this is the one I’ve settled on — that uses the getUserMedia API to take live data from the user’s camera so that it can scan for QR codes in near real time.
The goal of the app was to maintain 60fps in the UI and near-instant detection of the QR code, which meant that I had to put the detection code into a Web Worker (pretty standard stuff). In this post I just wanted to quickly share how I used Comlink to massively simplify the logic in the Worker.
qrclient.js
import * as Comlink from './comlink.js';
const proxy = Comlink.proxy(new Worker('/scripts/qrworker.js'));
export const decode = async function (context) {
  try {
    let canvas = context.canvas;
    let width = canvas.width;
    let height = canvas.height;
    let imageData = context.getImageData(0, 0, width, height);
    return await proxy.detectUrl(width, height, imageData);
  } catch (err) {
    console.log(err);
  }
};
qrworker.js (web worker)
import * as Comlink from './comlink.js';
import {qrcode} from './qrcode.js';
// Use the native API if it's available.
let nativeDetector = async (width, height, imageData) => {
  try {
    let barcodeDetector = new BarcodeDetector();
    let barcodes = await barcodeDetector.detect(imageData);
    // Return the first barcode.
    if (barcodes.length > 0) {
      return barcodes[0].rawValue;
    }
  } catch (err) {
    // Native detection failed; fall back to the worker-based detector
    // for this frame and all future frames.
    detector = workerDetector;
    return workerDetector(width, height, imageData);
  }
};

// Use the polyfill.
let workerDetector = async (width, height, imageData) => {
  try {
    return qrcode.decode(width, height, imageData);
  } catch (err) {
    // The library throws an exception when there are no QR codes.
    return;
  }
};

let detectUrl = async (width, height, imageData) => {
  return detector(width, height, imageData);
};

let detector = ('BarcodeDetector' in self) ? nativeDetector : workerDetector;

// Expose the API to the client pages.
Comlink.expose({detectUrl}, self);
I really love Comlink; I think it is a game changer of a library, especially when it comes to creating idiomatic JavaScript that works across threads. Finally, a neat thing here is that the native barcode detection API can be run inside a worker, so all the logic is encapsulated away from the UI.