The sectioning elements in #HTML5 are <article>, <aside>, <nav>, and <section>.
<body> is also a sectioning element of sorts, since all content inside it belongs to the default document section.
Google Docs Says Chromium-Based Microsoft Edge Is Not Supported
This little turf war is quite entertaining....
When users of the Chromium-based Microsoft Edge open Google Docs, the service states that the browser is not supported. As the new Microsoft Edge uses the same rendering engine as Chrome, which is clearly supported, some users feel that Google is playing unfairly.
Google Docs indicates that Edge is unsupported by displaying a notification at the top of the page that states “The version of the browser you are using is no longer supported. Please upgrade to a supported browser.” The message links to this Google support article.
This message is being shown because Google whitelists browsers based on their useragent strings and displays the above unsupported message for any browser that is not whitelisted.
A browser useragent is a string that identifies the browser. When the browser connects to a website, its useragent string is sent along so that the site can offer customizations based on the browser.
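As a rough sketch of what useragent-based whitelisting looks like: Google's actual rules are not public, so the "Edg" exclusion below is only an assumption based on the behavior the article describes, not their real implementation.

```javascript
// Hypothetical allowlist check -- an illustration of useragent sniffing,
// NOT Google's actual logic.
function isSupported(ua) {
  // Assumed rule: reject anything carrying the "Edg/" token that
  // identifies the new Chromium-based Microsoft Edge...
  if (/\bEdg\//.test(ua)) return false;
  // ...then accept browsers that present themselves as Chrome.
  return /\bChrome\/\d+/.test(ua);
}

const chromeUA =
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36';
console.log(isSupported(chromeUA)); // true
```

This also explains why switching the useragent string (as done later with the extension) makes the banner disappear: the check only ever sees the string the browser sends.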
For example, the useragent string for the new Microsoft Edge is:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3763.0 Safari/537.36 Edg/18.104.22.168
Even though the new Microsoft Edge is compatible with Google Docs because it shares the same rendering engine as Google Chrome, its useragent is not whitelisted, so Google Docs states it is unsupported.
To prove this, I installed the User-Agent Switcher for Chrome extension in Microsoft Edge and configured the browser to use the useragent of Chrome 74, which is shown below.
“Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36”
Once I switched the useragent and refreshed Google Docs, the unsupported message was no longer shown.
Google can fix it, but Microsoft can fix it faster
As there should not be any reason for Google to not whitelist the new Edge’s useragent string, how fast Google fixes this issue on their side will indicate whether they plan on playing fairly.
The good news is that Microsoft has a function built into the new Microsoft Edge that allows it to switch its useragent as needed based on the domain being visited.
As we reported earlier this month, this is being done so that sites that whitelist or offer different features based on a browser’s useragent string will work properly in the new Microsoft Edge.
If Google decides to take their time resolving this problem, Microsoft can bypass them altogether and create a new rule for the docs.google.com URL and use Chrome’s useragent string so that this “unsupported” message goes away.
How to Turn #react Component into Native Web Component
Step-by-step instructions on how to wrap a React component inside a framework-agnostic HTML custom element, how to expose its properties and events, and how to support children transclusion.

I have a side project creating and maintaining a React component library called #dotnetify-Elements: a very specialized set of UI components that are capable of talking in real time to a .NET Core back-end through WebSocket/SignalR. There have been a few occasions where I would have liked to use them on static web pages or websites built with other UI frameworks. It’s possible, but it entails jumping through a few hoops to get a React build system going, and sometimes that just may not be desirable. The Web Component standard, while not as versatile as React, at least (...)
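A minimal sketch of the wrapping technique the article describes, assuming React and ReactDOM are already loaded in the page; the component and tag names are placeholders, not part of the article's library:

```javascript
// Hypothetical React component we want to expose as a custom element.
function MyReactWidget({ label }) {
  return React.createElement('button', null, label);
}

class MyWidgetElement extends HTMLElement {
  static get observedAttributes() { return ['label']; }

  connectedCallback() { this.render(); }

  attributeChangedCallback() { this.render(); }

  render() {
    // Re-render the React tree into the custom element itself,
    // mapping HTML attributes to React props.
    ReactDOM.render(
      React.createElement(MyReactWidget, { label: this.getAttribute('label') }),
      this
    );
  }
}

customElements.define('my-widget', MyWidgetElement);
// Usage in plain HTML: <my-widget label="Click me"></my-widget>
```

Exposing events and children transclusion takes more plumbing (dispatching CustomEvents, rendering slots), which is what the full article walks through.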
Developer’s Pack: The #one Subscription
The ONE Subscription is a new service which offers thousands of pre-made products for websites building such as HTML templates and #wordpress themes. The ONE will provide you with literally everything one could possibly imagine for building websites. Moreover, this subscription is a pretty profitable service for the developers who build plenty of various websites and at the same time have to stay on a budget and do not want to to purchase all the items they need separately.So now let’s explore all the cons and pros of the ONE Subscription in order to help you finally decide if this service is good enough for you or not.What Exactly Is the ONE and Why You Might Need It?The ONE is a subscription service which offers over 8,500 various items for creating a countless amount of websites for a (...)
In this post, we’ll look at the new loading attribute which brings native <img> and <iframe> lazy-loading to the web! For the curious, here’s a sneak preview of it in action:
<img src="celebration.jpg" loading="lazy" alt="..." />
<iframe src="video-player.html" loading="lazy"></iframe>
We are hoping to ship support for loading in Chrome 75 and are working on a deep-dive of the feature we’ll publish soon. Until then, let’s dive into how loading works.
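Until support ships everywhere, the attribute can be feature-detected so that older browsers fall back to a JS lazy-loading library; the fallback script path below is purely illustrative.

```javascript
if ('loading' in HTMLImageElement.prototype) {
  // Native lazy-loading is available: <img loading="lazy"> just works.
} else {
  // Fall back to a lazy-loading library (e.g. lazysizes); the path here
  // is a placeholder for wherever you host the script.
  const script = document.createElement('script');
  script.src = '/js/lazysizes.min.js';
  document.body.appendChild(script);
}
```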
Overview of how #css works behind the scenes
Frontamentals

Let’s start by understanding what actually happens to our CSS code when we load up a web page in a browser.

When a browser starts to load the initial #html file, it takes the loaded HTML code and parses it, which means that it decodes the code line by line. Through this process, the browser builds the so-called DOM (Document Object Model), which describes the entire web document in a tree with parent, child, and sibling elements.

HTML Parsing

As the browser parses the HTML, it also finds the stylesheets included in the HTML head, and just like the HTML, the CSS is parsed. But the parsing of CSS is a bit more complex. There are two main steps performed during the CSS parsing phase:

1. Conflicting CSS declarations are resolved (also known as cascading)
2. Process final CSS values (for (...)
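For instance, cascading is what resolves conflicts like the one below, where the more specific selector wins; the class name is illustrative:

```css
/* Both rules target the same <p class="intro"> element. */
p      { color: black; } /* specificity (0,0,1): one type selector  */
.intro { color: blue;  } /* specificity (0,1,0): class wins, text is blue */
```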
#internet vs Blockchain Revolution: Are we in 1994? What to expect Next? (Part 5)
This article is part of the Internet vs Blockchain Revolution series. If you are interested in reading the other articles, check out this post.

Internet vs Blockchain Revolution: the evolution of the market, infrastructures, and companies

Are we in 1994?

Interestingly, when Marc Andreessen, the founder of Netscape, found himself in Silicon Valley in early 1994, he thought that he was too late and had missed the whole thing, as the short recession of 1990–1991 had hit the #technology industry hard. The current stage of blockchain and #cryptocurrency development is most analogous to the Internet Revolution in 1994, by which point TCP/IP, HTML, and FTP had been invented; out of these came Netscape (1994) and, much later, Facebook (2004) and Airbnb (2008). In blockchain, we are (...)
HTML is the language for creating web pages, and HTML elements are the building blocks of a page. Learn the basics to become a front-end professional. (This article was initially posted on ▻https://www.developermate.com)

HTML stands for HyperText Markup Language. The first web pages were released in 1990, and those pages were only used for presentation. Today the web is a very important part of our daily lives. You may use many different web browsers to view web pages, such as Google Chrome, Opera, Internet Explorer, and Firefox.

HTML Page Structure

It is important to understand the HTML page structure. The <head> is used for the title and meta tags. Only content inside the <body> section is displayed by the web browser.

HTML Tags

HTML tags are (...)
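The structure described above boils down to this minimal skeleton; the title and text are placeholders:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Metadata: nothing here is rendered in the page body. -->
    <meta charset="utf-8">
    <title>My First Page</title>
  </head>
  <body>
    <!-- Only content inside <body> is displayed by the browser. -->
    <h1>Hello, web!</h1>
    <p>This paragraph is visible.</p>
  </body>
</html>
```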
Why Should You Use #angularjs?: Key Features And Reasons
Planning to Switch Your Site From HTML to Gatsby?
Here’s what I learned when I built my #portfolio with Gatsby and ReactJS.

The days of WordPress are gone. Even after being a backend developer for 3 years, playing around with PHP still haunts me. I think this is true for most of the Python developers out there. Also, who would want a heavy site for a single-page portfolio, right?

Why I Hate WordPress? Well, duh… it’s PHP.

I moved my blogs to Medium, hence all I wanted was a lightweight single-page site. Now that the blogs are on Medium, I no longer have to host my site anywhere, meaning I don’t have to pay for a server. At one point, the amount of time I spent figuring out a plugin and a template was so much that I decided I would be better off writing things on my own.

Why Gatsby?

Gatsby is a #react-based, GraphQL-powered static site (...)
How to make a simple Machine Learning #website from scratch
From #wordpress to JAMstack : How to make your website 10x faster
This article will brief you about the #css Box Model.

What is The Box Model?

Every element in the #html document is composed of one or more rectangular boxes. The CSS box model describes how these rectangular boxes are laid out on a web page: how the padding, border, and margin are added to the content to create the rectangle. In other words, every box has a content area and an optional surrounding margin, padding, and border.

The innermost rectangle is the content box. Its width and height depend on the element’s content (text, images, videos, and any child elements). Then we have the padding box (defined by the padding property); if no padding width is defined, the padding edge is equal to the content edge. Next, the border box (...)
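A quick sketch of how those boxes add up, with arbitrary values: under the default box-sizing: content-box, the rendered width is content plus padding plus border.

```css
.card {
  width: 200px;      /* content box */
  padding: 20px;     /* padding box: 20px on each side */
  border: 2px solid; /* border box: 2px on each side */
  margin: 10px;      /* margin: transparent space outside the border */
}
/* Total rendered width: 200 + 2*20 + 2*2 = 244px (margin not included). */
```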
The “Backendification” of Frontend Development
Admiring The Poetry Of Code
This is a continuation of my previous guide; refer to it to learn how to publish code to npm. This article focuses on publishing code for use in a browser. There will be 2 sections to this guide:

1. Publishing browser-native code
2. Converting an npm module for use in a browser

For both cases, we will eventually deploy the code to npm to leverage the power of the free CDN unpkg.

1. Publishing browser-native code

This one is easier. As the code is already in a browser-usable format, it just needs to be included in the HTML code via <script> tags. This will make its variables and functions available to the browser automatically. To publish it to a CDN, use npm init -y to initialize a package.json for the repository, then publish it to npm via the steps given here: Steps to (...)
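Once the package is on npm, unpkg serves it automatically; including it is then a one-liner. The package name, version, file, and function below are placeholders:

```html
<!-- unpkg mirrors anything published to npm at unpkg.com/<pkg>@<version>/<file> -->
<script src="https://unpkg.com/my-package@1.0.0/index.js"></script>
<script>
  // Globals defined by index.js are now available on the page:
  myPackageFunction();
</script>
```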
Multiple Ways to Build a Banner Generation Tool with #phantomjs
Author: Yurii Vlasiuk

Today we want to share our experience implementing a backend tool for generating graphical #banners from HTML templates. #webbylab’s customer was interested in automating the customisation of advertisement banners based on prepared templates. Basically, you have an index.html with fields where you can insert different components, e.g. buttons, images, or text. I will describe the approaches we used to implement this functionality. For our project we chose PhantomJS. You may argue that there are more innovative solutions in the npm repository, but when we started, none of those libraries was stable or well documented yet (Puppeteer, for example, a Node.js library that provides a high-level API to control headless Chromium-based engines; among its most (...)
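The core of such a tool is a short PhantomJS script that opens the template and rasterizes it to an image; this is a generic sketch with illustrative file names and dimensions, not WebbyLab's actual code.

```javascript
// Run with: phantomjs render-banner.js
var page = require('webpage').create();

// Banner dimensions determine the viewport and the captured area.
page.viewportSize = { width: 728, height: 90 };

page.open('index.html', function (status) {
  if (status !== 'success') {
    console.error('Failed to load template');
    phantom.exit(1);
  }
  // Rasterize the rendered HTML template into a PNG banner.
  page.render('banner.png');
  phantom.exit();
});
```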
5 Free Sites to Learn to Code
Hosting a Free Static Website on #google Cloud Storage
This guide walks you through setting up a free bucket to serve a static website through a custom domain name using Google Cloud Platform services.

Sign in to Google Cloud Platform, navigate to the Cloud DNS service, and create a new public DNS zone. By default it will have an NS (Nameserver) record and an SOA (Start of Authority) record.

Go to your domain registrar; in my case I purchased a domain name from GoDaddy (super cheap). Add the nameserver names that were listed in your NS record. PS: It can take some time for the changes on GoDaddy to propagate through to Google Cloud DNS.

Next, verify you own the domain name using the Search Console. Many methods are available (HTML meta data, Google Analytics, etc.); the easiest one is DNS verification through a TXT record. Add the TXT record to your DNS zone (...)
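Once the domain is verified, the bucket side of the setup can be done from the command line with gsutil; the bucket name and local path below are placeholders for your own domain and build output.

```shell
# Create a bucket named after the site's domain (domain must be verified first).
gsutil mb gs://www.example.com

# Upload the site and make it publicly readable.
gsutil -m cp -r ./public/* gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com

# Serve index.html as the default page and 404.html on errors.
gsutil web set -m index.html -e 404.html gs://www.example.com
```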
How to use #gatsby with a Headless #cms
The Rise and Demise of RSS
Before the internet was consolidated into centralized information silos, RSS imagined a better way to let users control their online personas.
The story of how this happened is really two stories. The first is a story about a broad vision for the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.
RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was “the leading example of a lightweight syndication protocol.” Another contemporaneous article called RSS the first protocol to realize the potential of Extensible Markup Language (XML), a general-purpose markup language similar to HTML that had recently been developed. It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, after the rise of social media and Google’s decision to shut down Google Reader, RSS appears to be a slowly dying technology, now used chiefly by podcasters, programmers with tech blogs, and the occasional journalist. Though of course some people really do still rely on RSS readers, stubbornly adding an RSS feed to your blog, even in 2019, is a political statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbach’s imagining.
RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by a standard submitted to the Internet Engineering Task Force, the organization responsible for establishing and promoting the internet’s rules of the road. After the introduction of Atom, there were three competing versions of RSS: Winer’s RSS 0.92 (updated to RSS 2.0 in 2002 and renamed “Really Simple Syndication”), the RSS-DEV Working Group’s RSS 1.0, and Atom. Today we mostly use RSS 2.0 and Atom.
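For reference, this is what a minimal feed in RSS 2.0, the variant most sites publish today, looks like; the titles and URLs are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>A placeholder feed.</description>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
      <pubDate>Mon, 06 May 2019 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```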
For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.
Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+.
RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. Maybe browser support could have been improved. But whereas a company like Facebook was able to “move fast and break things,” the RSS developer community was stuck trying to achieve consensus. When they failed to agree on a single standard, effort that could have gone into improving RSS was instead squandered on duplicating work that had already been done. Davis told me, for example, that Atom would not have been necessary if the members of the Syndication mailing list had been able to compromise and collaborate, and “all that cleanup work could have been put into RSS to strengthen it.” So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, “currently it’s the politics far more than the serialization that’s far from simple.”
I appreciate, as you do, that he points out that technical decisions have political consequences. It is clear that the de facto abandonment of RSS #syndication accelerated the shift from a decentralized web to a web polarized around the GAFA companies. I am less convinced by his explanations of why syndication did not survive in the long run:

– Saying that RSS is not user-friendly is frankly silly. RSS is a format; the user never sees it. Hardly any RSS user, whether on the producer or the consumer side, has ever looked at what it actually looks like by opening it in vi! A piece of software can be "user-friendly" or not; for a format, the notion is meaningless.

– I find that he exaggerates the role of the disputes within the syndication world. Granted, those disputes may have helped sow confusion, but let's not overstate it: they took place in a tiny microcosm, and the vast majority of webmasters and readers never heard of them. (Incidentally, the winning camp is clearly the one that wanted a simple format: websites use only a small part of the format.) And from a practical point of view, the disputes had no consequences: all feed readers understand all three formats, so webmasters can publish whichever they want without worry.

– On the other hand, he says too little about the political and marketing reasons for the abandonment of syndication: notably the relentless propaganda by the media and other authorities in favor of centralized solutions.