Python, JavaScript and UNIX hacker, open source advocate, IRC addict, general badass and traveler
405 stories · 7 followers

Ubuntu Blog: History of Open Source Identity Management (part 2)


This is the second blog post (part 1 available here) in which we look at the history of open source identity management. This post focuses on OAuth and OpenID, the protocols currently used in modern applications and services.

This post does not cover the technical details of the open source identity management standards, which are explained very well in this Okta blog post. Rather, it explains the origins of OAuth and OpenID, and provides insight into the context that led to their creation.

Towards modern open source identity management

As we wrote in the previous article, in the late 1990s and early 2000s, identity management was widely believed to be a solved problem. However, two computing trends quickly challenged that common belief:

  • REST – In 2000, Roy Fielding laid the basis for REST APIs in his dissertation Architectural Styles and the Design of Network-based Software Architectures. It provided one of the key architectural patterns still used to this day.
  • Mobile phones – The popularity of BlackBerry, Windows Mobile, Palm, Symbian and later iOS and Android was increasing year by year, forever changing the way people consume information.

In both cases, the existing identity and access management frameworks were unfit and required relatively complex implementations with high operational costs. This led to a number of new standards being developed by small and large companies alike, and eventually to the two described below.

OpenID

The OpenID protocol was initially developed in 2005 by Brad Fitzpatrick under the name of Yadis (the original post is still available here) while he was working on the LiveJournal website. The protocol was born as a way to let users be authenticated by cooperating sites (called relying parties) using a single internal or external identity provider, effectively eliminating the need to maintain separate credentials across different sites.

From the early days, OpenID grew rapidly, thanks to the contributions of many corporations and the community. The most notable early contributions were made by JanRain, which developed the original OpenID libraries in many programming languages, and NetMesh, which added the Extensible Resource Descriptor Sequence (XRDS) format initially used in their Light-weight Identity (LID) protocol.

OpenID logo

At the time, OpenID was competing with Microsoft’s CardSpace and the Eclipse Foundation’s Higgins Trust Framework. However, OpenID gained significant traction, thanks to endorsements from major tech giants, including Microsoft, which saw in it a way to help their users easily reuse their identities across the internet.

The meteoric adoption of OpenID and the growth of its community prompted the creation of a formal governance body to promote its adoption and manage its evolution. In May 2007, the OpenID Foundation was formed with the mission of promoting and enhancing OpenID technologies and standards. The organisation is still active, and its members include some of the most notable names in tech, telecommunications, and professional services.

OAuth

OAuth was started in 2006, and its conception is closely tied to OpenID. When Blaine Cook (one of the co-authors of the original OAuth spec) was implementing OpenID for the Twitter API, he realised that there was no single, effective way of providing secure delegated access to it.

In that period, both the number and popularity of social media websites were growing exponentially, and with them the need to “connect users with their friends”. To enable that, social media sites typically asked users to share their login credentials for other services they already used (e.g., their email provider), so the site could see which contacts were already on the platform and send invitations to the ones that were not. This meant users had to trade off their security against the thrill of finding new friends.

Find Your Friends page in an old Facebook signup flow

OAuth was not supposed to be a new protocol, but rather a standardisation effort that brought the best aspects of the other protocols in use at the time (such as Yahoo BBAuth, Google AuthSub, the AWS API, etc.) under the same umbrella. Each one of those provided a semi-customised way of exchanging user credentials for an access token, primarily targeted at website services.

The other major innovation introduced by OAuth was its ability to operate well on traditional websites as well as on native mobile applications and connected devices. Version 1 of the spec was released in 2010, and version 2 (currently in use) followed in 2012 with RFC 6749 and RFC 6750.
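To make the bearer-token part of RFC 6750 concrete, here is a minimal sketch of calling a protected API with an access token. The endpoint and token are hypothetical placeholders, and a real client would first obtain the token through one of the RFC 6749 grant flows.

```python
# Minimal sketch of an RFC 6750 bearer-token request (hypothetical endpoint and token).
import json
import urllib.request

ACCESS_TOKEN = "example-access-token"            # placeholder; normally obtained via an OAuth 2.0 grant
API_URL = "https://api.example.com/v1/contacts"  # hypothetical protected resource

request = urllib.request.Request(
    API_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},  # the header format defined by RFC 6750
)

with urllib.request.urlopen(request) as response:
    contacts = json.load(response)
    print(contacts)
```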

While OAuth and OpenID flows share the same actor names and some architectural components, they are complementary standards that serve different purposes: OpenID was intended as a way to use a single identity to access many sites, while OAuth allowed users to grant different sites access to some of their private resources without sharing their authentication credentials.

OpenID Connect (OIDC)

Given the success and rapid adoption of OAuth, many developers decided to “hack” it into a way to perform authentication and pass identity information. However, this presented several security challenges. These challenges, coupled with evolving architectural needs, prompted the creation of the third version of OpenID, called OpenID Connect (OIDC).

OIDC is an identity layer built on top of the OAuth flow. It was officially released in 2014, and it has since become the de facto authentication standard in both the consumer and enterprise space, with several billion OIDC-enabled identities in use.

OpenID Connect logo

Despite OpenID Connect and OpenID sharing a name, they are two very different standards with different parameters and response body formats. OIDC combines the features of OpenID 2.0, OpenID Attribute Exchange 1.0, and OAuth 2.0, allowing an application to use an external authoritative service to do the following (a request sketch follows the list):

  1. Verify the user identity
  2. Get the user profile information in a standardised format
  3. Request approval and gain access to the required subset of user attributes
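As a rough, hypothetical illustration of those three capabilities, the sketch below assembles an OIDC authorization request: the openid scope turns an ordinary OAuth request into an authentication request, while profile and email ask for standardised profile claims. The provider, client ID and redirect URI are made up.

```python
# Sketch of an OpenID Connect authorization request URL (all identifiers are hypothetical).
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://id.example.com/oauth2/authorize"  # hypothetical identity provider

params = {
    "response_type": "code",                       # authorization code flow
    "client_id": "my-relying-party",               # hypothetical client registered with the provider
    "redirect_uri": "https://app.example.com/cb",  # where the provider sends the user back
    "scope": "openid profile email",               # "openid" makes this an OIDC request, not plain OAuth
    "state": "random-anti-csrf-value",             # should be freshly generated per request in real code
}

print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")
```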

Nowadays, OIDC is not only the internet’s identity layer, but also the standard through which we provide secure access to our financial information online via the Financial-grade API (FAPI), with a very active working group shaping the future of finance.

The future looks bright for identity management, and at Canonical we work hard to support the standard and its flows in our suite of products.

Read the whole story · miohtama · 9 days ago · Helsinki, Finland

Chris Achilleos’s 1981 film poster art for ‘Heavy Metal’




Read the whole story · miohtama · 73 days ago · Helsinki, Finland

Welcome Home




Read the whole story · miohtama · 98 days ago · Helsinki, Finland

How to Forecast Time Series Data Using Deep Learning


An autoregressive recurrent neural net developed at Amazon

Time series (TS) forecasting is notoriously finicky. That is, until now.

Figure 1: DeepAR trained output based on this tutorial. Image by author.

In 2019, Amazon’s research team developed a deep learning method called DeepAR that exhibits a ~15% accuracy boost relative to state-of-the-art TS forecasting models. It’s robust out-of-the-box and can learn from many different time series, so if you have lots of choppy data, DeepAR could be an effective solution.

From an implementation perspective, DeepAR is more computationally complex than other TS methods. It also requires more data than traditional TS forecasting methods such as ARIMA or Facebook’s Prophet.

That being said, if you have lots of complex data and need a very accurate forecast, DeepAR is arguably the most robust solution.

Technical TLDR

In short, DeepAR is an LSTM RNN with some bells and whistles to improve accuracy on complex data. There are 4 main advantages to DeepAR relative to traditional TS forecasting methods…

  1. DeepAR is effective at learning seasonal dependencies with minimal tuning. This out-of-the-box performance makes the model a good jumping-off point for TS forecasting.
  2. DeepAR can use covariates with little training history. By leveraging similar observations and weighted resampling techniques, the model can effectively determine how an infrequent covariate would behave.
  3. DeepAR makes probabilistic forecasts. These probabilities, in the form of Monte Carlo samples, can be used to develop quantile forecasts (see the sketch after this list).
  4. DeepAR supports a wide range of likelihood functions. If your dependent variable takes on a non-normal or non-continuous distribution, you can specify the relevant likelihood function.
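To make point 3 concrete, here is a minimal, library-agnostic sketch of turning Monte Carlo forecast samples into quantile forecasts; the sample array is synthetic.

```python
# Turning Monte Carlo forecast samples into quantile forecasts (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Pretend the model produced 500 sampled trajectories over a 24-step horizon.
samples = rng.normal(loc=100.0, scale=10.0, size=(500, 24))

# Median forecast plus an 80% prediction interval, per time step.
p10, p50, p90 = np.quantile(samples, [0.1, 0.5, 0.9], axis=0)
print(p50[:5], p10[:5], p90[:5])
```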

The model has received lots of attention and is currently supported in PyTorch. Tutorials and code are linked in the comments.

But, what’s actually going on?

Ok let’s slow down a bit and discuss how Amazon’s DeepAR model actually works…

Traditional Time Series Forecasting

Let’s start at square one.

As noted above, time series forecasting is notoriously difficult for two main reasons. The first is that most time series models require lots of subject matter knowledge. If you’re modeling a stock price with a traditional TS model, it’s important to know what covariates impact price, whether there’s a delay in the impact of those covariates, if price exhibits seasonal trends, etc.

Often engineers lack the subject matter knowledge required to create effective features.

The second reason is that TS modeling is a pretty niche skillset. Prior outputs of a given timestep are the inputs to the next timestep, so we can’t use the usual modeling techniques or evaluation criteria.

So, not only do engineers need in-depth knowledge of the data, they also must have a strong understanding of TS modeling techniques.

Traditional Recurrent Neural Nets (RNNs)

Machine learning can provide alternatives to traditional TS forecasts that are often more accurate and easier to build. The simplest ML algorithm that supports sequential data is the recurrent neural net.

RNNs are essentially a bunch of neural nets stacked on top of each other. The output of the model at h1 feeds into the next model at h2 as shown in figure 2.

Figure 2: graphical representation of a recurrent neural net. Image by author.

Here, x’s in blue are predictor variables, h’s in yellow are hidden layers, and y’s in green are predicted values. This model architecture automatically handles many of the challenges we discussed above.
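As a minimal sketch of the architecture in figure 2 (sizes and data are arbitrary, not from the article), PyTorch’s built-in RNN module rolls the hidden state forward one step at a time, and a linear head turns each hidden state into a prediction:

```python
# Minimal recurrent net: the hidden state at step t feeds into step t+1 (arbitrary sizes).
import torch
import torch.nn as nn

class SimpleRNNForecaster(nn.Module):
    def __init__(self, n_features: int = 3, hidden_size: int = 16):
        super().__init__()
        self.rnn = nn.RNN(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # map each hidden state to a predicted value

    def forward(self, x):                      # x: (batch, time, n_features)
        hidden_states, _ = self.rnn(x)         # (batch, time, hidden_size)
        return self.head(hidden_states)        # one prediction per time step

model = SimpleRNNForecaster()
y_hat = model(torch.randn(8, 24, 3))           # 8 series, 24 time steps, 3 predictors
print(y_hat.shape)                             # torch.Size([8, 24, 1])
```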

But even RNNs fall short in certain areas. First, they’re overly simplistic in their assumptions about what should be passed to the next hidden layer. More advanced components of an RNN, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, provide filters for what information gets passed down the chain. They often provide better forecasts than vanilla RNNs, but sadly they can fall short too.

Amazon’s DeepAR model is the most recent iteration on LSTM and solves two of its key shortcomings:

  1. Outliers are fit poorly due to a uniform sampling distribution. One of the strengths of DeepAR is that it aggregates information from many time series to develop a prediction for a single unit, e.g. a user. If all units have an equal chance of being sampled, outliers are smoothed over and our forecasted values become less extreme (and probably less accurate).
  2. RNNs don’t handle temporal scaling well. Over time, time series data often exhibit overall trends — just think about your favorite (or least favorite) stock during COVID. These temporal trends make fitting our model more difficult because it has to remove this trend when fitting, then add it back to our model output. That’s a lot of unnecessary work.

DeepAR solves both of these problems.

How does DeepAR Work?

Building on RNN architecture, DeepAR uses LSTM cells to fit our predictor variables to our variable of interest. Here’s how…

Sequence to Sequence Encoder-Decoder

First, to make our data more usable we leverage a sequence to sequence encoder-decoder. This method takes a set of n inputs, encodes those inputs with a neural net, then outputs m decoded values.

Fun fact — they’re the backbone of all language translation algorithms, such as Google Translate. So, using figure 3 below, let’s think about this method through the lens of translating English to Spanish.

Figure 3: graphical representation of a sequence to sequence encoder-decoder. Image by author.

Here, each of the x_n values in blue is an input to our model, i.e., an English word. They are sequentially fed into an RNN, which encodes their information and outputs it as an encoder vector (in yellow). The information in the encoder vector is represented as a weight vector for our hidden state. From there, our encoder vector is fed into our decoder, which is also an RNN. The final outputs, labeled y_n in green, are the Spanish words.
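A bare-bones version of that encoder-decoder wiring might look like the sketch below; the sizes are arbitrary, and a real translation or forecasting model would add embeddings, teacher forcing and attention on top.

```python
# Bare-bones sequence-to-sequence model: the encoder's final hidden state seeds the decoder.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_features: int = 3, out_features: int = 1, hidden_size: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(in_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(out_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, out_features)

    def forward(self, past, future_inputs):
        # Encode the conditioning range; keep only the final (hidden, cell) state.
        _, state = self.encoder(past)
        # Decode the prediction range, starting from the encoder's state.
        decoded, _ = self.decoder(future_inputs, state)
        return self.head(decoded)

model = Seq2Seq()
past = torch.randn(8, 48, 3)           # 48 observed steps with 3 features
future_inputs = torch.randn(8, 24, 1)  # 24 steps to decode (e.g., lagged targets during training)
print(model(past, future_inputs).shape)  # torch.Size([8, 24, 1])
```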

Pretty cool, right?

Scale According to our Data

Second, we tackle the two problems that make basic LSTMs inferior to DeepAR: uniform sampling and temporal scaling.

Both of these are handled using a scaling factor. The scaling factor is a user-specified hyperparameter but the recommended value is simply the average value of a given time series:

Figure 4: formula for the default scaling factor used in DeepAR — the average value of an individual time series. Image by author.

To handle the uneven sampling of extreme values, we make the sampling probability proportional to v_i. So, if the average value for a given time series is high, it’s more likely to be sampled, and vice versa.

To handle temporal scaling, we divide the inputs at each LSTM cell by v_i. By scaling our values down before inputting them into each cell, our model can focus on the relationships between our predictor and outcome variables instead of fitting the time trend. After the cell has fit our data we multiply its output by v_i to return our data to its original scale.
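Here is a rough sketch of both uses of the scaling factor, with made-up data; real DeepAR implementations do this inside the training loop rather than as a separate preprocessing step.

```python
# Sketch of DeepAR-style scaling: v_i is the average value of each series (made-up data).
import numpy as np

rng = np.random.default_rng(0)
series = rng.gamma(shape=2.0, scale=50.0, size=(100, 48))  # 100 series, 48 time steps each

v = series.mean(axis=1)                      # scaling factor per series

# 1) Weighted sampling: series with larger average values are drawn more often.
sampling_probs = v / v.sum()
batch_idx = rng.choice(len(series), size=32, p=sampling_probs)

# 2) Temporal scaling: divide inputs by v_i before the LSTM cell, multiply outputs back.
batch = series[batch_idx]
scaled_inputs = batch / v[batch_idx, None]   # what would be fed into the network
scaled_predictions = scaled_inputs           # stand-in for the network's output on the scaled inputs
predictions = scaled_predictions * v[batch_idx, None]  # back on the original scale
```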

Less cool but still pretty cool. Just one more section to go…

Fit using a Likelihood Function

Third and finally, with all of our data intricacies handled, we fit by maximizing the conditional probability of our dependent variable given our predictors and model parameters (figure 5). This estimator is called the Maximum Likelihood Estimator (MLE).

Figure 5: simplified DeepAR likelihood function. Image by author.

The simplified expression above is what we are looking to maximize — we want to find the model parameters (θ) that maximize the probability of our dependent variable (z). We also condition on our covariates (x_t) and the output of our prior node (z_t-1).

So, given our covariates and the predicted value at the prior timestep, we find the parameter values that maximize the likelihood of the observed values.
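As a sketch of what maximizing that likelihood looks like in code, assuming a Gaussian likelihood (DeepAR supports others), the network would emit a mean and scale per step, and training would minimize the negative log-likelihood:

```python
# Negative log-likelihood for a Gaussian likelihood head (made-up tensors).
import torch

batch, steps = 8, 24
z = torch.randn(batch, steps)                        # observed values

# Pretend these came from the network's projection of each hidden state h_t.
mu = torch.randn(batch, steps, requires_grad=True)   # predicted mean per step
log_sigma = torch.zeros(batch, steps, requires_grad=True)

sigma = torch.exp(log_sigma)                         # keep the scale positive
dist = torch.distributions.Normal(mu, sigma)

# Maximizing the likelihood == minimizing the negative log-likelihood.
loss = -dist.log_prob(z).mean()
loss.backward()                                      # gradients flow back to mu and log_sigma
print(loss.item())
```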

Now the likelihood also includes a data transformation step, but from a comprehension and implementation standpoint, it’s not super important. If you’re curious, check out the paper linked in the comments.

And there you have it — DeepAR in a nutshell. Before you go here are some practical tips on implementing the method…

Implementation Notes

  • The authors suggest standardizing covariates, i.e., subtracting the mean and dividing by the standard deviation (a short sketch follows this list).
  • Missing data can be a big problem for DeepAR. The authors suggest imputing the missing data by sampling from the conditional predictive distribution.
  • For the example cited in the paper, the authors created covariates that correspond to temporal information. Some examples were age (in days) and day-of-week.
  • Optimizing parameters using a grid search is an effective way to tune the model. However, learning rate and encoder/decoder length are subject-specific and should be tuned manually.
  • To ensure the model is not fitting based on the index of our dependent variable in the TS, the authors suggest training the model on “empty” data prior to our start period. Just use a bunch of zeros as our dependent variable.
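A quick sketch of the first and third bullets, standardizing a covariate and building simple temporal covariates, on made-up daily data:

```python
# Standardized and temporal covariates for a daily series (made-up data).
import numpy as np
import pandas as pd

dates = pd.date_range("2021-01-01", periods=60, freq="D")
temperature = pd.Series(np.random.default_rng(0).normal(20, 5, size=60), index=dates)

covariates = pd.DataFrame(index=dates)
covariates["temp_std"] = (temperature - temperature.mean()) / temperature.std()  # subtract mean, divide by std
covariates["age_days"] = np.arange(len(dates))                                   # "age" of the series in days
covariates["day_of_week"] = dates.dayofweek                                      # 0 = Monday ... 6 = Sunday

print(covariates.head())
```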

Thanks for reading! I’ll be writing 41 more posts that bring “academic” research to the DS industry. Check out my comments for links/tutorials for building a DeepAR model.


How to Forecast Time Series Data Using Deep Learning was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

Read the whole story · miohtama · 119 days ago · Helsinki, Finland

The art of Campari


This is something I wrote a few years ago for the Daily Telegraph when Campari put on an exhibition of its posters in London.

You’d never get some of the images in the forthcoming exhibition of classic Campari posters past the Advertising Standards Authority. There are figures who might appeal to children, adults who appear to be under 25 and, most shocking of all, images linking the consumption of Campari to seduction. Thankfully there were no such strictures in 20th-century Italy, when Campari commissioned a series of posters that blurred the line between art and advertising.

Campari has its origins in the heart of industrial Italy, Milan. In 1861, a cafe owner, Gaspare Campari, created a blend of 68 botanicals, neutral alcohol, water and sugar. That striking red colour came from cochineal beetles. A manufactured product, not governed by the whims of nature like wine nor weighed down by tradition, it was the perfect drink for a young country (Italy had only been unified in 1861) and for mass advertising. It was Gaspare’s son Davide who set about selling the drink, first across Italy and then around the world.

The early advertising campaigns linked drinking Campari with glamour and sophistication. A 1913 poster by Marcello Dudovich shows a group of Edwardian ladies in elaborate pastel dresses and hats. It’s a classic image of Belle Epoque bourgeois contentment, though one of the men standing with them is wearing a military uniform with a sword, which gives the poster a melancholy edge when one thinks what would happen the following year.

Whereas the pre-war posters are fairly conventional, after the war Campari’s advertising became decidedly avant garde. Futurism, the Italian artistic movement based on speed and modernity, embraced advertising wholeheartedly. The painter Giacomo Balla wrote: “any store in a modern town, with its elegant windows all displaying useful and pleasing objects, is much more aesthetically enjoyable than […] the grimy little pictures nailed on the grey wall of the passéist painter’s studio.” In the modern world, people were not going to have time to stop in museums and look at pictures but would look at art on the street.

This union of commerce and cutting-edge art found its most playful exponent in Fortunato Depero (above). He wrote that “the art of the future will be largely advertising”, and in 1926 he began his long relationship with Campari. His style is instantly recognisable: monotone abstract images, tribal motifs, slogans and stylised figures collide in a way that looks like early Russian revolutionary art but with a sense of humour. Most striking of all is his 1931 design for a pavilion, never built, in which the entire structure is formed from the word Campari. Depero sealed his place in Italian culture with his design for the triangular bottle of premixed Campari and soda, the Italian equivalent of the Coca-Cola bottle.

Campari’s advertising embraced other artistic styles: there was surrealism in the posters of Leonetto Cappiello, with sinister-looking clowns jumping through hoops of orange; the dreamlike silhouettes of Ugo Mochi; and my own personal favourites, the cubist still lifes by Marcello Nizzoli, in which the Campari bottle takes centre stage (below). This was truly a melding of fine art and commerce.

All this took place under Mussolini’s fascist regime. Initially his vision chimed with the Futurists but following the 1929 Lateran treaty with the pope, Mussolini wanted art to show a Catholic, agrarian and family-orientated Italy. A similar reaction against the avant garde happened under Stalin but whereas Soviet propaganda art of the same period became kitsch, Campari cheerfully ignored Il Duce’s edicts and the adverts continued as before. Advertisers had more artistic freedom in fascist Italy than in modern day Britain.

After the war the adverts change: it’s out with modernism and in with pop art, reflecting the optimism of Italy’s post-war boom. It’s advertising for the Fiat 500 generation. For me this part of the exhibition is less satisfying, perhaps because the pop art style is already so soaked in advertising. Still, there are some great images: a gamine Audrey Hepburn-esque figure (below – much too sexy for the ASA), a quirky image of Depero’s bottle with running legs (these first two by Franz Marangolo), and a typographical poster that plays with the recognisability of the Campari brand. This last image, by Bruno Munari, is made of different fonts, like a ransom note cut out of a newspaper. Again there’s dramatic irony here, as it prefigures the political violence and kidnapping of the 1970s anni di piombo (years of lead) that would mark the end of Italy’s sunny postwar age.

The golden age of poster advertising came to an end at around the same time, with the rise of television. Posters were now part of larger multimedia campaigns, though Campari still aimed for the top: Federico Fellini directed a 1984 television advert. This exhibition celebrates a special moment in advertising history, a time when commercial art could be confident, joyful and beautiful. And effective too: aren’t you now craving the distinctive bittersweet taste of Campari? I know I am.



Read the whole story · miohtama · 124 days ago · Helsinki, Finland

You need to watch the best cheesy sci-fi movie of 1986 for free online ASAP


It would probably be a stretch to call Jim Wynorski’s movies “good.” But the prolific B-movie filmmaker has found a way to stay relevant within the ever-changing landscape of low-budget exploitation movies, homing in on the key elements of various disreputable subgenres.

Early in his career, Wynorski directed lower-profile sequels to movies that weren’t exactly beloved to begin with (Deathstalker II, Sorority House Massacre II, 976-EVIL II). Later, he worked in “erotic” thrillers (Sins of Desire, Victim of Desire, Virtual Desire), action movies (Stealth Fighter, Extreme Limits), softcore sex parodies (The Bare Wench Project, The Witches of Breastwick), and dog-themed family movies (A Doggone Christmas, A Doggone Hollywood), among many, many others. He’s also responsible for one of the greatest shark-movie title puns of all time (Sharkansas Women’s Prison Massacre).

But Wynorski’s greatest artistic achievement came when he was just starting out, working under the tutelage of legendary B-movie producer Roger Corman. Wynorski’s second feature as a director is the rare B-movie that lives up to the ridiculousness of its title, and must be seen to be believed. Good news: you can see it online for free, right now.

Is 1986’s Chopping Mall “good”? Maybe not, but it fully accomplishes its goals, delivering everything you could reasonably want from a movie about mall security robots run amok.

Wynorski and co-writer Steve Mitchell bring a self-aware, self-deprecating sense of humor to the movie, which is full of tributes to the rich history of B-movies (and to Corman’s work in particular). Produced by Corman’s wife Julie, Chopping Mall is set in a mall with stores that include Peckinpah’s Sporting Goods and Roger’s Little Shop of Pets, references to director Sam Peckinpah and Corman himself.

Chopping Mall opens with a presentation about the so-called Protectors, security robots that look like a cross between Short Circuit’s Johnny 5 and RoboCop’s ED-209.

In attendance at that presentation are murderous restaurant owners Paul and Mary Bland (played by Paul Bartel and Mary Woronov), the main characters from Bartel’s 1982 cult classic dark comedy Eating Raoul. The Blands’ snarky running commentary on the corporate pitch for the Protectors establishes the movie’s deadpan tone, while still providing crucial exposition about Protectors’ deadly capabilities.

“They remind me of your mother,” Paul Bland notes to his wife. “It’s the laser eyes.”

Meanwhile, a company scientist assures the audience, “Absolutely nothing can go wrong,” which is a guarantee that soon, absolutely everything will go wrong.

That’s unfortunate for the requisite group of horny young people who’ve gathered in the mall after closing for a party in a furniture store (you know, as young people tend to do). Thanks to a freak lightning strike on the mall’s electrical system, the Protectors have been activated and set to kill (the movie’s original title was Killbots).

Their first target? The security technicians supposedly monitoring them. One oblivious technician is reading a dirty magazine, while the other is reading They Came From Outer Space, a book of sci-fi stories that were later adapted into movies (edited, of course, by Jim Wynorski). That combination sums up the appeal of Chopping Mall, which offers plenty of gratuitous nudity before getting to the robot killing spree.

Wynorski doesn’t just show off the bodies of his female stars, though. The main character is assertive, independent pizza parlor employee Alison (Kelli Maroney), who isn’t convinced about the party or the potential blind date set up by her friend and co-worker Suzie (Barbara Crampton). Alison and Suzie mock the boorish male customers who treat them like pieces of meat (“It is ‘babe,’ isn’t it?” is their impression of an entitled man talking down to them), and when it comes time to fight killbots, Alison proves the most resourceful of the bunch.

Her blind date is the shy Ferdy (Tony O’Dell), who’s honorable but still not as accurate with a rifle as Alison is. “Dad’s a Marine,” she shrugs after being praised for a perfect shot, just like her similarly perky but deadly character in 1984’s Night of the Comet.

Before Alison gets to shoot at some killbots, though, her slightly less responsible peers get their throats slit and their bodies blown to bits (among other injuries), all while the Protectors intone “Have a nice day.” Anyone who isn’t sure what they were getting into with a movie called Chopping Mall will fully understand once the Protectors’ lasers explode one tragic mallgoer’s head.

Wynorski maintains the campy tone throughout, but he also generates some genuine suspense from the menacing Protectors as they stalk the characters through various stores in the mall. The robots cut through metal doors, ride elevators, and even mess with the climate control system as the characters attempt to escape via the vents.

“They’re trying to French fry us!” Suzie cries out, as the cramped ducts heat up.

At 77 minutes, Chopping Mall has very few lulls and never outstays its welcome. Wynorski understands his mandate perfectly, and he creates a movie that his mentor Corman can be proud of. “I guess I’m just not used to being chased around a mall in the middle of the night by killer robots,” laments Linda (Karrie Emerson) as the situation looks grim late in the movie. But for a B-movie workhorse like Wynorski, that’s just another day at the office.

Chopping Mall is streaming for free on Pluto TV, Popcornflix, Shout Factory TV, Tubi, Vudu and Hoopla in the U.S.

Read the whole story · miohtama · 221 days ago · Helsinki, Finland