Blog

  • SpecFlow.Gherkin

    SpecFlow.Gherkin

    Visual Studio Extensions:

    	SpecFlow for Visual Studio 2019
    

    CustomerRegistration.feature

    Feature: Manage Basic Customer Registration
    
    @CustomerRegistration
    Scenario: Register Name and Last Name
    	Given I have entered Name Jessé into the form
    	And I have entered Last Name Toledo into the form
    	When I press add
    	Then the result should be the Full Name registered
    
    @CustomerRegistration
    Scenario Outline: Unregistered Name and Last Name
    	Given I have entered Name <Name> into the form
    	And I have entered Last Name <LastName> into the form
    	When I press add
    	Then the result should be the Full Name unregistered
    
    	Examples:
    		| Name   | LastName |
    		| <null> | Toledo   |
    		| Jessé  | <null>   |
    		| <null> | <null>   |
    

    CustomerRegistrationStep.cs

    	 [Binding]
        public sealed class CustomerRegistrationStep
        {
            private readonly Customer _customer;
            private readonly HttpClient _client;
            private HttpResponseMessage _response;
    
            public CustomerRegistrationStep(WebApplicationFactory<Startup> webApplicationFactory)
            {
                _customer = new Customer();
    
                var factory = webApplicationFactory.WithWebHostBuilder(builder =>
                {
                    builder.ConfigureAppConfiguration((host, config) =>
                    {
                        config.AddTestConfig(host.HostingEnvironment);
                    });
                });
    
                _client = factory.CreateClient();
            }
    
            [Given(@"I have entered Name (.*) into the form")]
            public void GivenIHaveEnteredNameIntoTheForm(string name)
            {
                _customer.Name = name;
            }
    
            [Given(@"I have entered Last Name (.*) into the form")]
            public void GivenIHaveEnteredLastNameIntoTheForm(string lastName)
            {
                _customer.LastName = lastName;
            }
    
            [When(@"I press add")]
            public async Task WhenIPressAdd()
            {
                var json = Utf8Json.JsonSerializer.PrettyPrint(Utf8Json.JsonSerializer.Serialize(_customer));
    
                _response = await _client
                    .PostAsync("/api/CustomerRegistration",
                        new StringContent(json, Encoding.UTF8
                            , "application/json"))
                    .ConfigureAwait(false);
            }
    
            [Then(@"the result should be the Full Name registered")]
            public async Task ThenTheResultShouldBeTheFullNameRegistered()
            {
                _response.EnsureSuccessStatusCode();
    
                var body = await _response.Content.ReadAsStringAsync().ConfigureAwait(false);
                var actual = JsonConvert.DeserializeObject<int>(body);
    
                actual.Should().NotBe(0);
            }
    
            [Then(@"the result should be the Full Name unregistered")]
            public void ThenTheResultShouldBeTheFullNameUnregistered()
            {
                _response.StatusCode.Should().Be(HttpStatusCode.BadRequest);
            }
    
        }
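
    The step bindings above construct and post a Customer object; that model is not shown in this excerpt. A minimal sketch consistent with the bindings (property names inferred from the steps; the real project may differ) could be:

        public sealed class Customer
        {
            public string Name { get; set; }

            public string LastName { get; set; }
        }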

    ConfigurationExtensions.cs

    public static class ConfigurationExtensions
        {
            public static IConfigurationBuilder AddTestConfig(this IConfigurationBuilder configurationBuilder, IWebHostEnvironment webHostEnvironment)
            {
                configurationBuilder.AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
                                    .AddJsonFile($"appsettings.{webHostEnvironment.EnvironmentName}.json", optional: true, reloadOnChange: false);
    
                return configurationBuilder;
            }
        }

    HooksConfig.cs

    [Binding]
        public sealed class HooksConfig
        {
            [StepArgumentTransformation(@"<null>")]
            public string ToNull()
            {
                return null;
            }
        }

    docker-compose.yml

    version: '3.4'
    
    services:
        sql_server:
            image: mcr.microsoft.com/mssql/server:latest
            ports:
                - "1433:1433"
            environment:
                SA_PASSWORD: 'P@ssword'
                ACCEPT_EULA: 'Y'

    PowerShell

    docker-compose up --build

    Visit original content creator repository
    https://github.com/H4ck3rBatera/SpecFlow.Gherkin

  • nginx-oidc-core

    nginx-openid-connect

    Reference implementation of NGINX Plus as relying party for OpenID Connect authentication

    Description

    This repository describes how to enable OpenID Connect integration for NGINX Plus. The solution depends on NGINX Plus components (auth_jwt module and key-value store) and as such is not suitable for open source NGINX.

    OpenID Connect components

    Figure 1. High level components of an OpenID Connect environment

    This implementation assumes the following environment:

    • The identity provider (IdP) supports OpenID Connect 1.0
    • The authorization code flow is in use
    • NGINX Plus is configured as a relying party
    • The IdP knows NGINX Plus as a confidential client or a public client using PKCE

    With this environment, both the client and NGINX Plus communicate directly with the IdP at different stages during the initial authentication event.

    OpenID Connect protocol diagram Figure 2. OpenID Connect authorization code flow protocol

    NGINX Plus is configured to perform OpenID Connect authentication. Upon a first visit to a protected resource, NGINX Plus initiates the OpenID Connect authorization code flow and redirects the client to the OpenID Connect provider (IdP). When the client returns to NGINX Plus with an authorization code, NGINX Plus exchanges that code for a set of tokens by communicating directly with the IdP.

    The ID Token received from the IdP is validated. NGINX Plus then stores the ID Token in the key-value store, issues a session cookie to the client using a random string (which becomes the key to obtain the ID Token from the key-value store), and redirects the client to the original URI requested prior to authentication.

    Subsequent requests to protected resources are authenticated by exchanging the session cookie for the ID Token in the key-value store. JWT validation is performed on each request, as normal, so that the ID Token validity period is enforced.

    For more information on OpenID Connect and JWT validation with NGINX Plus, see Authenticating Users to Existing Applications with OpenID Connect and NGINX Plus.

    Refresh Tokens

    If a refresh token was received from the IdP then it is also stored in the key-value store. When validation of the ID Token fails (typically upon expiry) then NGINX Plus sends the refresh token to the IdP. If the user’s session is still valid at the IdP then a new ID token is received, validated, and updated in the key-value store. The refresh process is seamless to the client.

    Logout

    Requests made to the /logout location invalidate both the ID token and refresh token by erasing them from the key-value store. Therefore, subsequent requests to protected resources will be treated as a first-time request and send the client to the IdP for authentication. Note that the IdP may issue cookies such that an authenticated session still exists at the IdP.

    Multiple IdPs

    Where NGINX Plus is configured to proxy requests for multiple websites or applications, or user groups, these may require authentication by different IdPs. Separate IdPs can be configured, with each one matching on an attribute of the HTTP request, for example, hostname or part of the URI path.

    Note: When validating OpenID Connect tokens, NGINX Plus can be configured to read the signing key (JWKS) from disk, or a URL. When using multiple IdPs, each one must be configured to use the same method. It is not possible to use a mix of both disk and URLs for the map…$oidc_jwt_keyfile variable.
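
    For illustration only, selecting the JWKS source per hostname might look like the sketch below; the hostnames and values are placeholders rather than the exact contents of the shipped configuration:

    map $host $oidc_jwt_keyfile {
        app1.example.com "https://idp1.example.com/oauth2/v1/keys";
        app2.example.com "https://idp2.example.com/oauth2/v1/keys";
        # All entries must use the same method: all URLs, or all files on disk.
    }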

    Installation

    Start by installing NGINX Plus. In addition, the NGINX JavaScript module (njs) is required for handling the interaction between NGINX Plus and the OpenID Connect provider (IdP). Install the njs module after installing NGINX Plus by running one of the following:

    $ sudo apt install nginx-plus-module-njs for Debian/Ubuntu

    $ sudo yum install nginx-plus-module-njs for CentOS/RHEL

    The njs module needs to be loaded by adding the following configuration directive near the top of nginx.conf.

    load_module modules/ngx_http_js_module.so;

    Finally, create a clone of the GitHub repository.

    $ git clone https://github.com/nginxinc/nginx-openid-connect

    Note: There is a branch for each NGINX Plus release. Switch to the correct branch to ensure compatibility with the features and syntax of each release. The main branch works with the most recent NGINX Plus and JavaScript module releases.

    All files can be copied to /etc/nginx/conf.d

    Non-standard directories

    The GitHub repository contains include files for NGINX configuration, and JavaScript code for token exchange and initial token validation. These files are referenced with a relative path (relative to /etc/nginx). If NGINX Plus is running from a non-standard location then copy the files from the GitHub repository to /path/to/conf/conf.d and use the -p flag to start NGINX with a prefix path that specifies the location where the configuration files are located.

    $ nginx -p /path/to/conf -c /path/to/conf/nginx.conf

    Running in containers

    This implementation is suitable for running in a container provided that the base image includes the NGINX JavaScript module. The GitHub repository is designed to facilitate testing with a container by binding the cloned repository to a mount volume on the container.

    $ cd nginx-openid-connect
    $ docker run -d -p 8010:8010 -v $PWD:/etc/nginx/conf.d nginx-plus nginx -g 'daemon off; load_module modules/ngx_http_js_module.so;'

    Running locally in containers

    This implementation can also be tested locally in a container. You can find the details of what to set up here before running docker-compose up.

    $ cd nginx-openid-connect
    $ docker-compose up

    Running behind another proxy or load balancer

    When NGINX Plus is deployed behind another proxy, the original protocol and port number are not available. NGINX Plus needs this information to construct the URIs it passes to the IdP and for redirects. By default NGINX Plus looks for the X-Forwarded-Proto and X-Forwarded-Port request headers to construct these URIs.
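
    As a sketch, the proxy or load balancer in front of NGINX Plus could forward this information as follows (the upstream name and port are placeholders):

    server {
        listen 443 ssl;

        location / {
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port  $server_port;
            proxy_pass http://nginx-plus-oidc:8010;
        }
    }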

    Configuring your IdP

    • Create an OpenID Connect client to represent your NGINX Plus instance

      • Choose the authorization code flow
      • Set the redirect URI to the address of your NGINX Plus instance (including the port number), with /_codexch as the path, for example, https://my-nginx.example.com:443/_codexch
      • Ensure NGINX Plus is configured as a confidential client (with a client secret) or a public client (with PKCE S256 enabled)
      • Make a note of the client ID and client secret if set
    • If your IdP supports OpenID Connect Discovery (usually at the URI /.well-known/openid-configuration) then use the configure.sh script to complete configuration. In this case you can skip the next section. Otherwise:

      • Obtain the URL for jwks_uri or download the JWK file to your NGINX Plus instance
      • Obtain the URL for the authorization endpoint
      • Obtain the URL for the token endpoint
      • Obtain the URL for the user info endpoint
      • Obtain the URL for the end session endpoint for logout
      • Obtain the scopes for setting $oidc_scopes

    Configuring NGINX Plus

    Configuration can typically be completed automatically by using the configure.sh script.

    Manual configuration involves reviewing the following files so that they match your IdP(s) configuration.

    • oidc_idp.conf – this contains the primary configuration for one or more IdPs in map{} blocks

      • Modify all of the map…$oidc_ blocks to match your IdP configuration.
      • Modify the URI defined in map…$oidc_logout_redirect to specify an unprotected resource to be displayed after requesting the /logout location.
      • Set a unique value for $oidc_hmac_key to ensure nonce values are unpredictable.
      • If NGINX Plus is deployed behind another proxy or load balancer, modify the map…$redirect_base and map…$proto blocks to define how to obtain the original protocol and port number.
    • oidc_frontend_backend.conf – this is the reverse proxy configuration

      • Modify the upstream group to match your frontend site or backend app.
      • Configure the preferred listen port and enable SSL/TLS configuration.
      • Modify the severity level of the error_log directive to suit the deployment environment.
      • Comment/uncomment the x-id-token in the proxy header for the API request based on your needs. The default is to only set Bearer $access_token in the header.
    • oidc_nginx_http.conf – this is the NGINX configuration in http block for handling the various stages of OpenID Connect authorization code flow

      • No changes are usually required here
      • Modify $post_logout_return_uri if you want to redirect to a custom logout page rather than the default page after successful logout from the IdP.
      • Modify $return_token_to_client_on_login if you want to return id_token to the frontend app with a query parameter after successful login from the IdP. Otherwise the default value is empty.
    • oidc_nginx_server.conf – this is the NGINX configuration in each server block for handling the various stages of OpenID Connect authorization code flow

      • No changes are usually required here
      • Modify the resolver directive to match a DNS server that is capable of resolving the IdP defined in $oidc_token_endpoint
      • If using auth_jwt_key_request to automatically fetch the JWK file from the IdP then modify the validity period and other caching options to suit your IdP
      • The /login endpoint: Comment/uncomment the auth_jwt_key_file or auth_jwt_key_request directives based on whether $oidc_jwt_keyfile is a file or URI, respectively
    • oidc.js – this is the JavaScript code for performing the authorization code exchange and nonce hashing

      • No changes are required unless modifying the code exchange or validation process

    Configuring the Key-Value Store

    The key-value store is used to maintain persistent storage for ID tokens, access tokens, and refresh tokens. The default configuration should be reviewed so that it suits the environment; it is part of the advanced configuration in oidc_nginx_http.conf. Modify the directory used for the key-value store state files as required by your system.

    keyval_zone zone=oidc_id_tokens:1M     state=conf.d/oidc_id_tokens.json     timeout=1h;
    keyval_zone zone=oidc_access_tokens:1M state=conf.d/oidc_access_tokens.json timeout=1h;
    keyval_zone zone=refresh_tokens:1M     state=conf.d/refresh_tokens.json     timeout=8h;
    keyval_zone zone=oidc_pkce:128K                                             timeout=90s;

    Each of the keyval_zone parameters is described below.

    • zone – Specifies the name of the key-value store and how much memory to allocate for it. Each session will typically occupy 1-2KB, depending on the size of the tokens, so scale this value to exceed the number of unique users that may authenticate.

    • state (optional) – Specifies where all of the Tokens in the key-value store are saved, so that sessions will persist across restart or reboot of the NGINX host. The NGINX Plus user account, typically nginx, must have write permission to the directory where the state file is stored. Consider creating a dedicated directory for this purpose.

    • timeout – Expired tokens are removed from the key-value store after the timeout value. This should be set to a value slightly longer than the JWT validity period. JWT validation occurs on each request, and will fail when the expiry date (exp claim) has elapsed. If JWTs are issued without an exp claim then set timeout to the desired session duration. If JWTs are issued with a range of validity periods then set timeout to exceed the longest period.

    • sync (optional) – If deployed in a cluster, the key-value store may be synchronized across all instances in the cluster, so that all instances are able to create and validate authenticated sessions. Each instance must be configured to participate in state sharing with the zone_sync module and by adding the sync parameter to the keyval_zone directives above.
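
    A sketch of a clustered setup is shown below; the hostnames and port are placeholders. The sync parameter is added to each keyval_zone in oidc_nginx_http.conf (http context), and zone synchronization is enabled in the stream context on every instance:

    keyval_zone zone=oidc_id_tokens:1M state=conf.d/oidc_id_tokens.json timeout=1h sync;

    stream {
        server {
            listen 9000;
            zone_sync;
            zone_sync_server nginx-node1.example.internal:9000;
            zone_sync_server nginx-node2.example.internal:9000;
        }
    }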

    Session Management

    The NGINX Plus API is enabled in oidc_nginx_server.conf so that sessions can be monitored. The API can also be used to manage the current set of active sessions.

    To query the current sessions in the key-value store:

    $ curl localhost:8010/api/6/http/keyvals/oidc_id_tokens
    $ curl localhost:8010/api/6/http/keyvals/oidc_access_tokens
    $ curl localhost:8010/api/6/http/keyvals/refresh_tokens

    To delete a single session:

    $ curl -iX PATCH -d '{"<session ID>":null}' localhost:8010/api/6/http/keyvals/oidc_id_tokens
    $ curl -iX PATCH -d '{"<session ID>":null}' localhost:8010/api/6/http/keyvals/oidc_access_tokens
    $ curl -iX PATCH -d '{"<session ID>":null}' localhost:8010/api/6/http/keyvals/refresh_tokens

    To delete all sessions:

    $ curl -iX DELETE localhost:8010/api/6/http/keyvals/oidc_id_tokens
    $ curl -iX DELETE localhost:8010/api/6/http/keyvals/oidc_access_tokens
    $ curl -iX DELETE localhost:8010/api/6/http/keyvals/refresh_tokens

    Real time monitoring

    The oidc_nginx_server.conf file defines several status_zone directives to collect metrics about OpenID Connect activity and errors. Separate metrics counters are recorded for:

    • OIDC start – New sessions are counted here. See step 2 in Figure 2, above. Success is recorded as a 3xx response.

    • OIDC code exchange – Counters are incremented here when the browser returns to NGINX Plus after authentication. See steps 6-10 in Figure 2, above. Success is recorded as a 3xx response.

    • OIDC logout – Requests to the /logout URI are counted here. Success is recorded as a 3xx response.

    • OIDC error – Counters are incremented here when errors in the code exchange process are actively detected. Typically there will be a corresponding error_log entry.

    To obtain the current set of metrics:

    $ curl localhost:8010/api/6/http/location_zones

    In addition, the NGINX Plus Dashboard can be configured to visualize the monitoring metrics in a GUI.

    Troubleshooting

    Any errors generated by the OpenID Connect flow are logged to the error log, /var/log/nginx/error.log. Check the contents of this file as it may include error responses received from the IdP. The level of detail recorded can be modified by adjusting the severity level of the error_log directive.

    • 400 error from IdP

      • This is typically caused by incorrect configuration related to the client ID and client secret.
      • Check the values of the map…$oidc_client and map…$oidc_client_secret variables against the IdP configuration.
    • 500 error from nginx after successful authentication

      • Check for could not be resolved and empty JWK set while sending to client messages in the error log. This is common when NGINX Plus cannot reach the IdP’s jwks_uri endpoint.
      • Check the map…$oidc_jwt_keyfile variable is correct.
      • Check that the resolver directive in oidc_nginx_server.conf points to a DNS server that is reachable from the NGINX Plus host.
      • Check for OIDC authorization code sent but token response is not JSON messages in the error log. This is common when NGINX Plus cannot decompress the IdP’s response. Add the following configuration snippet to the /_jwks_uri and /_token locations in oidc_nginx_server.conf:
        proxy_set_header Accept-Encoding "gzip";
    • Authentication is successful but browser shows too many redirects

      • This is typically because the JWT sent to the browser cannot be validated, resulting in an ‘authorization required’ 401 response and restarting the authentication process. But the user is already authenticated at the IdP, so they are redirected back to NGINX, hence the redirect loop.
      • Avoid using auth_jwt_require directives in your configuration, because they can also return a 401 which is indistinguishable from a missing or expired JWT.
      • Check the error log /var/log/nginx/error.log for JWT/JWK errors.
      • Ensure that the JWK file (map…$oidc_jwt_keyfile variable) is correct and that the nginx user has permission to read it.
    • Logged out but next request does not require authentication

      • This is typically caused by the IdP issuing its own session cookie(s) to the client. NGINX Plus sends the request to the IdP for authentication and the IdP immediately sends back a new authorization code because the session is still valid.
      • Check your IdP configuration if this behavior is not desired.
    • Failed SSL/TLS handshake to IdP

      • Indicated by error log messages including peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream.
      • This can occur when the IdP requires Server Name Indication (SNI) information as part of the TLS handshake. Additional configuration is required to satisfy this requirement.
      • Edit oidc_nginx_server.conf and for each of the /_jwks_uri, /_token, /_refresh and /userinfo locations, add the following configuration snippet:
    proxy_set_header Host <IdP hostname>;
    proxy_ssl_name        <IdP hostname>;

    Support

    This reference implementation for OpenID Connect is supported for NGINX Plus subscribers.

    Changelog

    • R15 Initial release of OpenID Connect reference implementation
    • R16 Added support for opaque session tokens using key-value store
    • R17 Configuration now supports JSON Web Key (JWK) set to be obtained by URI
    • R18 Opaque session tokens now used by default. Added support for refresh tokens. Added /logout location.
    • R19 Minor bug fixes
    • R22 Separate configuration file, supports multiple IdPs. Configurable scopes and cookie flags. JavaScript is imported as an independent module with js_import. Container-friendly logging. Additional metrics for OIDC activity.
    • R23 PKCE support. Added support for deployments behind another proxy or load balancer.
    • R25 Added access_token to proxy to the backend. Added endpoints of /login and /userinfo. Enhanced /logout endpoint with post logout uri. Token configuration to forward and return. Configurable query and path param to the IDP endpoint. Bundled test page. Enhanced session management.
    Visit original content creator repository https://github.com/nginx-openid-connect/nginx-oidc-core
  • geoiojpg

    GeoIOJpg

    A wrapper around GeoIO for non-georeferenced jpg files. Simply give the path to a jpg file and a shapely object describing the geospatial boundary of that image.

    Install

    Note that GeoIO only works with Python 2; therefore GeoIOJpg has the same limitation.

    pip install geoiojpg
    

    Usage

    from shapely.geometry import box
    from geoiojpg import GeoImage
    
    # Path to some satellite image in jpg format
    img_file = './img.jpg'
    
    # bounding box (somewhere in Madagascar)
    boundary = box(49.482, -15.908, 49.570, -15.996)
    
    # Create the GeoImage (this returns a regular geoio GeoImage)
    geoimg = GeoImage(img_file, boundary)
    
    print(geoimg.raster_to_proj(50, 50))
    # > (49.4826462689568, -15.9084275189568)

    How it works

    GeoIOJpg builds a VRT in XML format out of the supplied metadata: it computes the geotransform from the supplied shapely object and the height/width of the image, and it assumes that all jpg files are in RGB format. It then calls the GeoIO constructor with the temporary VRT file. After the GeoIO object has been created, the temporary file is deleted.
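
    As an illustration of the geotransform computation, a sketch is shown below; compute_geotransform is a hypothetical helper written for this example, not a function exposed by the library:

    from shapely.geometry import box

    def compute_geotransform(boundary, width, height):
        # GDAL-style geotransform: (origin_x, pixel_w, row_rot, origin_y, col_rot, pixel_h)
        minx, miny, maxx, maxy = boundary.bounds
        pixel_width = (maxx - minx) / float(width)
        pixel_height = (maxy - miny) / float(height)
        # The origin is the top-left corner, so the y step is negative.
        return (minx, pixel_width, 0.0, maxy, 0.0, -pixel_height)

    print(compute_geotransform(box(49.482, -15.996, 49.570, -15.908), 500, 500))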

    Visit original content creator repository
    https://github.com/ArnholdInstitute/geoiojpg

  • resource-router

    resource-router


    Angular routing engine that drives views by media types. It loads the data itself and, based on the response Content-Type header, displays the configured view. It is a replacement for the original Angular Router (they cannot be used at the same time).

    The aim of this library is to allow building RESTful clients using Angular, following the HATEOAS principle.

    See CHANGELOG for release changes.

    Installation

    npm i angular-resource-router --save
    

    Documentation

    See API documentation for latest released version.

    Configuration

    A sample snippet showing how the router is configured. It is very similar to the original router, but instead of registering paths, we register media types.

    import { NgModule } from '@angular/core';
    import { BrowserModule } from '@angular/platform-browser';
    import { ResourceRouterModule } from 'angular-resource-router';
    import { AppComponent } from './app.component';
    import { SampleComponent } from './sample.component';
    import { ErrorComponent } from './error.component';
    
    @NgModule({
      declarations: [
        AppComponent,
        SampleComponent
      ],
      imports: [
        BrowserModule,
        ResourceRouterModule.configure({
          prefix: 'api/'
        }),
        ResourceRouterModule.forTypes([
          {
            type: 'application/x.sample',
            component: SampleComponent
          },
          {
            status: '*',
            type: '*',
            component: ErrorComponent
          }
        ])
      ],
      bootstrap: [
        AppComponent
      ]
    })
    export class ExampleModule {
    }

    How It Works

    TODO

    Library

    Build of the library is performed with

    npm run build
    

    Publishing

    To publish new library version to npm repository:

    1. Verify/set library version in src/lib/package.json
    2. Make sure there are release notes in the CHANGELOG.
    3. Commit and push changes, if any
    4. Create a new release on GitHub, in the format v0.0.0 (or alternatively use git directly)
    5. Copy changelog markdown text to the release
    6. Build the library and make sure all tests passed
      npm run build
      
    7. Publish the library
      npm publish dist/angular-resource-router
      
    8. Publish docs, if this is head release
      npm run ghpages
      
      Note: For this command to succeed, you must have an ssh agent running in the background.
    9. Merge the branches if needed

    Demo app

    Local development server can be started with

    npm start
    

    TODO

    Things that are yet to be implemented

    • Complete README
    • Complete example
    • Support for resolve and data route configs
    • Support for outlet layouts, outlet resolve
    • Outlet context data (name etc)
    • Navigation within outlet
    • Hide element if empty link
    • External navigation for unknown type
    • Build and publish docs
    • Typedoc
    Visit original content creator repository https://github.com/mdvorak/resource-router
  • ayah-sender


    AyahSender is a Python package that lets you fetch Quranic audio and images easily.

    Features

    • Fetch a single ayah audio.
    • Merge multiple ayahs into a single audio file.
    • Supports various reciters.
    • Saves audio files in MP3 format.
    • Save a transparent .png image of an ayah

    Installation

    Install the package via pip:

    pip install ayah-sender

    Requirements

    • requests
    • pydub
    • ffmpeg – PyDub requires ffmpeg to perform its operations.

    Follow the articles to install ffmpeg on your system if you don’t have it installed.

    Usage

    Basic Usage

    Here is an example of how to use AyahSender:

    from ayah_sender import AyahSender
    
    ayahSender = AyahSender()
    
    # Show available reciters
    reciters_dict = ayahSender.reciter.show_reciters()
    print(reciters_dict)
    
    # Fetch a single ayah's audio
    audio_data = ayahSender.get_single_ayah(reciter_id=1, chapter_num=1, verse_num=1)
    
    # Save the single ayah audio
    ayahSender.save_audio(audio_data, output_dir='.')
    
    # Merge multiple ayahs' audio
    merged_audio_data = ayahSender.merge_ayahs(reciter_id=5, chapter_num=1, start_verse=1, end_verse=5)
    
    # Save the merged audio file
    ayahSender.save_audio(merged_audio_data, output_dir='.')
    
    # Getting png image of an ayah
    ayahSender.get_image(chapter_num=2, verse_num=255, output_dir='ayah-png')

    Functions

    get_total_verses_in_chapter(chapter_number)

    • Fetches the total number of verses in a given chapter.

    get_single_ayah(reciter_id, chapter_num, verse_num)

    • Fetches a single ayah from a specified reciter, chapter, and verse.

    merge_ayahs(reciter_id, chapter_num, start_verse, end_verse)

    • Fetches and merges a range of ayahs from a specified reciter and chapter.

    save_audio(audio_data, output_dir=".")

    • Saves the audio data to the specified directory.

    get_image(chapter_num, verse_num, output_dir='.')

    • Fetches and saves a PNG image of an ayah

    Reciters List

    The reciters.csv file contains the list of reciters. The Reciter class reads this file to fetch reciter information.

    Example

    See examples folder for examples.

    Contributing

    Feel free to submit issues or pull requests. For major changes, please open an issue first to discuss what you would like to change.

    License

    MIT License

    Contact

    For any queries, contact us at dev.asib@proton.me.

    Acknowledgements

    • everyayah.com – Jazahumullahu Khairan to them for the audio files.
    • Quran.com – Jazahumullahu Khairan to them for the API.

    Enjoy using AyahSender for your Quranic audio needs!

    Visit original content creator repository https://github.com/codewithasib/ayah-sender
  • Positioning-Popular-Consumer-Electronics-Brands-Using-Principal-Component-Analysis

    Positioning-Popular-Consumer-Electronics-Brands-Using-Principal-Component-Analysis

    Background

    In the fast-paced world of consumer electronics, brand perception can make or break a company. Let’s dive into the minds of consumers and uncover the hidden patterns shaping the industry’s biggest names.

    This analysis delves into the perceptions of 10 major tech brands, using advanced statistical techniques to uncover insights that could reshape marketing strategies.

    I analyze consumer brand perception survey data (consumer electronics_brand_ratings_updated.csv) for popular consumer electronic device brands.

    image

    This data reflects consumer ratings of brands with regard to perceptual adjectives as expressed on survey items with the following form:

    On a scale from 1 to 10—where 1 is least and 10 is most—how [ADJECTIVE] is [BRAND A]?

    In this dataset, an observation is one respondent’s rating of a brand on one of the adjectives. Two such items might be:

    1. How trendy is Sony?
    2. How much of a category leader is Bose?

    Such ratings are collected for all the combinations of adjectives and brands of interest.

    The multidimensional data here comprise simulated ratings of 10 brands on 9 adjectives (“performance,” “leader,” “latest,” “fun,” and so forth), for N = 100 simulated respondents.

    Screenshot 2024-03-06 at 7 50 50 PM

    Rescaling and Clustering the Data

    I take the initial data and standardize the first 9 columns to a common measure (z-scoring), making sure all parts of the data use the same scale. Then I use the corrplot package to visualize how closely related each pair of variables (such as questions in a survey about brand attributes) is. The strength of the relationship between two variables is shown by how close together or far apart they appear on the map. Hierarchical clustering orders the variables in the correlation plot, grouping variables with similar patterns of correlation into clusters, as below:

    image
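
    A minimal sketch of this step; the data frame name brand.ratings and the column layout (the nine adjectives in columns 1 through 9) are assumptions:

    library(corrplot)
    brand.sc <- brand.ratings
    brand.sc[, 1:9] <- scale(brand.ratings[, 1:9])    # z-score the adjective columns
    corrplot(cor(brand.sc[, 1:9]), order = "hclust")  # cluster-ordered correlation plot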

    The plot suggests the following:

    • Certain brand attributes are closely related and form clusters. For example, attributes like “fun,” “trendy,” and “latest” might be one cluster, indicating that brands perceived as fun also tend to be seen as trendy and up-to-date.

    • The size and color of the circles indicate the strength and direction of the relationship between pairs of attributes. Large, dark blue circles represent strong positive correlations, meaning as one attribute increases, the other tends to increase as well. Conversely, large, dark red circles represent strong negative correlations, where an increase in one attribute corresponds to a decrease in the other.

    • Smaller circles or those closer to white suggest little to no correlation between those attributes, indicating that they are perceived independently of one another.

    Aggregate Mean Ratings by Brand

    What is the average (mean) position of each brand on each adjective? To answer this, I first gather all the data for each brand and calculate the average, then paint a heatmap showing how similar or different the brands are across the various attributes. Some brands are strongly associated with certain attributes: for example, dark blue indicates a strong positive association, so a brand with dark blue in "perform" is seen as high performing.

    image
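
    A minimal sketch of the aggregation, assuming the rescaled data frame brand.sc from above with a "brand" column holding the brand names:

    brand.mean <- aggregate(. ~ brand, data = brand.sc, mean)  # mean rating per brand
    rownames(brand.mean) <- brand.mean[, 1]
    brand.mean <- brand.mean[, -1]
    heatmap(as.matrix(brand.mean))                             # brand-by-adjective heatmap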

    For each brand, these are the perceptions:

    • Sony – known for high performance and serious technology.
    • LG – also stands for reliable performance and innovation.
    • Bose – a leader in audio with a serious approach to sound quality.
    • Canon – a trusted brand for photographic equipment, valuing performance.
    • Apple – often seen as a leader, innovative, and trendy.
    • Hewlett-Packard (HP) – a long-standing brand known for value and performance in computing.
    • Dell – a serious and value-oriented brand in computing technology.
    • Epson – known for reliable performance in printing technology.
    • Asus – balances performance with value, also trendy in the gaming community.
    • JBL – provides fun and trendy audio products with good performance.

    Sony and LG have high ratings for “rebuy” and “value” but score low on “latest” and “fun”. This could indicate that while customers are loyal and see value in these brands, they may not view them as the most cutting-edge or entertaining compared to others. This could suggest a strategy focused on highlighting new innovations and features in their latest products to shift consumer perception towards seeing Sony and LG as more contemporary and exciting brands.

    Bose and Canon share similarities in certain attributes. They could possibly be perceived well in terms of quality or reliability but might need to work on other aspects such as being viewed as budget-friendly or innovative.

    Apple, HP and Dell are typically strong brands, often perceived as leaders in innovation and quality. Grouped together, it suggests they share a strong market perception across several attributes. They should continue to leverage their strengths in marketing strategies.

    Asus and JBL are paired for attributes like “fun” and “bargain”, which might imply that consumers see these brands as offering enjoyable products at a good price point. This pairing could suggest a strategy focused on marketing products that bring enjoyment to consumers at a competitive price.

    Principal Component Analysis (PCA)

    PCA helps make things simpler by finding out which parts of your data tell you the most about the whole set without having to look at every little piece. When you compare several brands across many dimensions, it can be helpful to focus on just the first two or three principal components that explain variation in the data. I find the components with the PCA function prcomp().

    I can select how many components out of the 9 to focus on using a scree plot, which shows how much variation in the data is explained by each principal component. A scree plot is often interpreted as indicating where additional components are not worth the complexity; this occurs where the line has an elbow, a kink in the angle of bending where the graph stops dropping off sharply and starts to flatten out. This is a somewhat subjective determination, and in this case we consider it to be around the second or third component:

    image
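
    A minimal sketch of this step, reusing the brand.mean matrix of mean ratings built earlier:

    brand.mu.pc <- prcomp(brand.mean, scale. = TRUE)  # PCA on the brand means
    summary(brand.mu.pc)                              # variance explained per component
    plot(brand.mu.pc, type = "l")                     # scree plot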

    Findings

    A biplot of the PCA solution for the mean ratings gives an interpretable correspondence map, showing where the brands are placed with respect to the first two principal components:

    image

    The actual names or specific meanings of PC1 and PC2 are not directly provided in the biplot; these components are new super-attributes created by combining the original attributes in specific ways. They are mathematical constructs that capture certain variances and are defined by their loadings on the original variables (the brand attributes). The 'loadings' are the correlations or connections each original variable (like "fun," "perform," etc.) has with a given component. They tell us how much a change in the original variable would move the component. A high loading means that attribute strongly influences the PC, while a low loading means it has less influence.

    • PC1 (Horizontal Axis): This principal component captures the largest variance in the dataset. This dimension likely captures perceptions related to price, value, or affordability. Brands on the left like LG and Sony are associated with “value” and “bargain”, while premium brands like Bose, Canon, and Apple are on the right side.

    • PC2 (Vertical Axis): The second principal component often captures variance not accounted for by the first. This dimension seems to represent perceptions around innovation, trendiness, or performance quality. Brands higher up like Bose, Canon, and Apple are labeled as “serious/leader”, “trendy”, and “latest”, suggesting high-performance or cutting-edge attributes. Lower brands like JBL and Asus are aligned with “fun”, potentially implying a focus on excitement, customer experience, entertainment or casual use cases.

    This information matters because it simplifies complex data into a few key ‘themes’. Instead of juggling dozens of individual attributes, you can concentrate on a few big-picture components. It makes analysis and decision-making more focused and manageable.

    Dell and Epson are positioned near the center, suggesting a more neutral or mainstream perception.

    Recommendations

    Epson's goal to be a safe brand that appeals to many consumers meant that its relatively undifferentiated position was desirable so far. However, the new CMO wishes the brand to have a strong, differentiated space where no other brand is positioned. In the correspondence map, there is a large gap between the Sony/LG group at the bottom of the chart and the Bose/Canon group on the upper left. This area might be described as the "value leader" area or similar.

    How do we find out how to position there? Let’s assume that the gap reflects approximately the average of those four brands. We can find that average using colMeans() on the brands’ rows, and then take the difference of Epson from that average:

    Screenshot 2024-03-06 at 10 17 59 PM
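
    A minimal sketch of that computation, assuming brand.mean is the matrix of mean ratings built earlier and taking the four gap brands from the correspondence map:

    gap.center <- colMeans(brand.mean[c("Sony", "LG", "Bose", "Canon"), ])
    gap.center - brand.mean["Epson", ]  # difference between the gap position and Epson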

    This suggests that brand Epson could target the gap by increasing its emphasis on performance while reducing emphasis on “latest” and “fun.”

    Conclusion: Charting the Course in a Dynamic Market

    This analysis provides a data-driven roadmap for understanding and navigating the complex terrain of tech brand perceptions. For Epson, the path forward involves a strategic shift towards a ‘value performance leader’ position, filling a gap in the current market landscape.

    As the tech industry continues to evolve, regular reassessment of brand perceptions will be crucial. This analytical approach offers a powerful tool for tracking shifts in consumer sentiment and identifying emerging opportunities.

    Visit original content creator repository https://github.com/apoorvadudani/Positioning-Popular-Consumer-Electronics-Brands-Using-Principal-Component-Analysis
  • dag

    Stories in Ready
    dag

    Directed Acyclic Graphs for Haskell

    Description

    This is a type-safe directed acyclic graph library for Haskell. This library
    differs from others in a number of ways:

    • Edge construction is incremental, creating a “schema”:

      {-# LANGUAGE DataKinds #-}
      
      import Data.Graph.DAG.Edge
      
      -- | Edges are statically defined:
      edges = ECons (Edge :: EdgeValue "foo" "bar") $
        ECons (Edge :: EdgeValue "bar" "baz") $
        ECons (Edge :: EdgeValue "foo" "baz")
        unique -- ENil, but casted for uniquely edged graphs
    • The nodes are separate from the edges; the graph may not be connected:

      data Cool = AllRight
                | Radical
                | SuperDuper
      
      graph = GCons "foo" AllRight $
        GCons "bar" Radical $
        GCons "baz" SuperDuper $
        GNil edges
    • Type safety throughout edge construction:

      *Data.Graph.DAG> :t edges
      edges
        :: EdgeSchema
             '['EdgeType "foo" "bar", 'EdgeType "bar" "baz",
               'EdgeType "foo" "baz"] -- Type list of edges
             '['("foo", '["bar", "baz"]), '("bar", '["baz"])] -- potential loops
             'True -- uniqueness
    • Various type-level computation

      *Data.Graph.DAG> :t getSpanningTrees $ edges
      getSpanningTrees $ edges
        :: Data.Proxy.Proxy
             '['Node "foo" '['Node "bar" '['Node "baz" '[]],
                             'Node "baz" '[]],
               'Node "bar" '['Node "baz" '[]],
               'Node "baz" '[]]
      
      *Data.Graph.DAG> reflect $ getSpanningTrees $ edges
      [Node "foo" [Node "bar" [Node "baz" []]
                  ,Node "baz" []]
      ,Node "bar" [Node "baz" []]
      ,Node "baz" []]

    This library is still very naive, but it will give us compile-time enforcement
    of acyclicity in these graphs – ideal for dependency graphs.

    Usage

    You will need -XDataKinds for the type-level symbols:

    {-# LANGUAGE DataKinds #-}
    
    import Data.Graph.DAG
    import GHC.TypeLits
    
    ...

    Visit original content creator repository
    https://github.com/athanclark/dag

  • Appunti-LFC

    Formal Languages and Compilers Lecture Notes (Appunti di Linguaggi Formali e Compilatori)

    logo

    Contents

    The project

    This text is a set of lecture notes written by students; its purpose is to collect the contents of the Formal Languages and Compilers (Linguaggi Formali e Compilatori) course and organize them into an exposition that is as complete, effective, and intuitive as possible, both for the student aiming for a thorough command of the material and for the lazier student looking for resources to simply get through the exam.

    The notes were taken during the Formal Languages and Compilers course taught by Professor Paola Quaglia for the Bachelor's degree in Computer Science, DISI, University of Trento, academic year 2020-2021. The contents therefore come primarily from the professor's lectures, whereas the order and exposition are largely original. Likewise, most of the assets (figures, graphs, tables, pseudocode) are drawn from the professor's material in terms of content, but were recreated and very often reworked by the authors; moreover, to achieve that result, it was often necessary to depart almost entirely from the professor's exposition and use it merely as an outline for developing an original one.

    More information about the project and its authors can be found in the preface of the document.

    Reporting errors

    If while reading you come across errors of any kind, including typos, conceptual errors, or layout issues, please report them; we will be grateful and will fix them as soon as possible. If you have made it this far we assume a good familiarity with GitHub, so the channels for reporting errors are:

    • open a GitHub issue, if possible referencing in the body the portion of the source where the error occurs
    • if you want to propose a fix directly, you can clone the repo (build instructions here) and open a pull request with the commits that resolve the error; we will give you feedback as soon as possible

    If you prefer not to report the error through GitHub you can still contact us personally via the email addresses on our GitHub profiles, or via the institutional email nome.cognome@studenti.unitn.it, or of course by more informal means.

    How to build

    Standard build chain

    Prerequisites:

    • a TeX distribution, e.g. MiKTeX or TeX Live
    • pip, so make sure you have Python installed
    • if you want to compile using the Arara tool, you will need a JVM installed

    Then:

    1. clone the repository with git clone https://github.com/filippodaniotti/Appunti-LFC
    2. install the Pygments package with pip install Pygments
    3. compile, for example:
      • by running pdflatex -shell-escape main.tex three times
      • by running latexmk -pdf -shell-escape main.tex
      • for a quick compilation, you can use Arara with arara main.tex (you must be in the ~/src/ directory); this is equivalent to running a single pass of pdflatex

    Each chapter and each asset can be compiled individually; just run the compilation on the desired .tex file.

    If you work with an IDE or an editor paired with a LaTeX tool (e.g. VS Code + LaTeX Workshop or Atom + latex), make sure to enable the -shell-escape flag in your tool's compilation settings.

    How to build with Docker

    If you do not have the build prerequisites listed above (or do not want to install them system-wide), you can build with Docker.
    If you have both VS Code and Docker, you can refer to this gist for a ready-made configuration.

    Prerequisites:

    • Docker
    • at least 5-6 GB of free disk space

    Steps:

    1. clone the repository with git clone https://github.com/filippodaniotti/Appunti-LFC
    2. if you are not on Linux, build the Docker image defined in the Dockerfile with docker build -t dispensa_lfc .
      If you are on Linux, build with docker build -t dispensa_lfc --build-arg UID=$(id -u) --build-arg GID=$(id -g) . (this ensures that your user ID and group ID match the ones the container will use)
    3. start a container, remembering to mount the notes folder. Example: docker run -ti --rm -v $(pwd):/dispensa --name dispensa_lfc dispensa_lfc
    4. you now have access to an environment with all the dependencies installed and can build using the commands from the how to build section (except for Arara, because the container does not have a JVM installed)

    Main packages used

    • standalone to manage the standalone compilation of chapters and assets
    • tabularx for tables
    • forest for generating trees
    • tikz with the automata libraries for generating graphs
    • algorithm2e for writing pseudocode
    • minted for typesetting code

    The team

    Visit original content creator repository https://github.com/filippodaniotti/Appunti-LFC
  • kyoml

    KyoML

    A dynamic markup language with support for directives and plugins. More details on kyoml.com


    Basic usage

    npm install --save kyoml

    const kyoml = require('kyoml');
    
    const json = kyoml.compile(`
      key1 = 'value1'
      key2 = 'value2'
    
      block {
        key3 = [1,2,3]
      }
    `)

    Directives

    Directives can be added anywhere in the document with an @ prefix. In essence they are functions that allow you to alter your document and create enriched experiences.

    const kyoml = require('kyoml');
    const _ = require('lodash'); // used by the directive below
    
    const json = kyoml.compile(`
      key1 = 'value1'
      key2 = 'value2'
    
      block {
        @uppercase
    
        key3 = [1,2,3]
      }
    `, {
      directives: {
        uppercase: (node) => {
          const {
            root,   // The root of the entire document
            base,   // The direct parent on which element you're operating on is
            key,    // The key on the base which contains the element
            path,   // The full path from the root (e.g document.block)
            value,  // The element itself
            set,    // A handy replace method
            get     // A getter allowing you to access anything in the entire document
          } = node;
    
          // You can now operate on your node
    
          set(_.mapKeys(value, key => key.toUpperCase()));
        }
      }
    })

    Output:

    {
      "key1": "value1",
      "key2": "value2",
      "block": {
        "KEY3": [1,2,3]
      }
    }

    Directives with arguments

    Directives can also take arguments, e.g

    const kyoml = require('kyoml');
    const _ = require('lodash'); // used by the directive below
    
    const json = kyoml.compile(`
      @prefix("foo_")
    
      key1 = 'value1'
      key2 = 'value2'
    
      block {
        key3 = [1,2,3]
      }
    `, {
      directives: {
        prefix: ({ value, set }, prefix) => {
          set(_.mapKeys(value, key => prefix + key));
        }
      }
    })

    Output:

    {
      "foo_key1": "value1",
      "foo_key2": "value2",
      "foo_block": {
        "key3": [1,2,3]
      }
    }

    Mappers

    Mappers are simply directives, which replace the value directly

    const kyoml = require('kyoml');
    
    const json = kyoml.compile(`
      @addProperty("foo")
    
      key1 = 'value1'
      key2 = 'value2'
    `, {
      mappers: {
        addProperty: (value, key) => ({ ...value, [key]: 'bar' })
      }
    })

    Output:

    {
      "key1": "value1",
      "key2": "value2",
      "foo": "bar"
    }

    Piping

    Values can be piped into directives using the directional |> or <| operators

    e.g

    const kyoml = require('kyoml');
    
    const json = kyoml.compile(`
      key1 = 'value1' |> @uppercase
      key2 = @uppercase <| 'value2'
    `, {
      mappers: {
        uppercase: (value) => (value.toUpperCase())
      }
    })

    Output:

    {
      "key1": "VALUE1",
      "key2": "VALUE2"
    }

    String interpolation

    String interpolation is supported using ${}

    Example

    block {
      hello = 'hello'
    
      subblock {
        array = ["world"]
        sentence = "${block.hello} ${block.subblock.array[0]}"
      }
    }
    

    NOTE: only double quoted strings will be interpolated

    Supported types

    • Strings
    • Numbers
    • Arrays
    • Object
    • Booleans

    Roadmap

    • Primitive types
    • Comments
    • Blocks
    • Directives
    • Pipes
    • VSCode syntax hightlighting
    • Playground
    • CLI converter
    Visit original content creator repository https://github.com/kyoml/kyoml
  • backpack.tf-ws-service

    Visit original content creator repository
    https://github.com/LucasHenriqueDiniz/backpack.tf-ws-service