Blog

  • mlstack

    mlstack

    (work in progress)

    ML infrastructure stack and playground for local (for now) machine learning development, including a workflow engine, experiment tracking, model serving (upcoming), and more.

    The individual components are described in the sections below.

    Quick Start

    Install poetry and flytectl

    poetry install

    Kubernetes

    For the local development cluster and how to start it, refer to kubernetes/README.md.

    Flyte

    To test Flyte locally using the local development cluster, perform the following steps.

    Make sure a hosts entry for the fully qualified domain name (FQDN) of the storage service (e.g. MinIO) is set. Pyflyte gets the MinIO endpoint from the flyteadmin service: in the cluster, Flyte is configured to use a particular storage service, and that configuration is forwarded to the client (pyflyte). If the FQDN does not resolve locally, the client will not be able to connect to the storage service.

    127.0.0.1       ml-minio.default.svc.cluster.local

    Make sure your flyte config is defined – default in ~/.flyte/config – see flyte docs for more details.

    mkdir -p ~/.flyte && \
    cat << EOF > ~/.flyte/config
    admin:
      # grpc endpoint
      endpoint: localhost:8089
      authType: Pkce
      insecure: true
    logger:
      show-source: true
      level: 6
    EOF

    Make sure that the flyte/flyteadmin service is running and the corresponding ports are forwarded (see kubernetes/README.md).

    Execute a Flyte workflow remotely:

    pyflyte run --remote samples/hello_flyte.py my_wf
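
    For reference, here is a minimal sketch of what a workflow module like samples/hello_flyte.py could contain. The task and workflow bodies are assumptions for illustration, not the repository's actual code; only the workflow name my_wf is taken from the command above.

    from flytekit import task, workflow

    @task
    def say_hello(name: str) -> str:
        return f"hello {name}"

    @workflow
    def my_wf(name: str = "flyte") -> str:
        return say_hello(name=name)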

    MLFlow

    Test artifact storage:

    poetry run python samples/mlflow/artifacts.py --tracking_uri http://localhost:8888/mlflow

    Test experiment tracking:

    poetry run python samples/mlflow/tracking.py --tracking_uri http://localhost:8888/mlflow
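
    For orientation, a minimal tracking script along these lines might look as follows. This is a sketch only: the experiment name, parameter and metric are made up, and the real samples/mlflow/tracking.py may differ.

    import argparse

    import mlflow

    parser = argparse.ArgumentParser()
    parser.add_argument("--tracking_uri", default="http://localhost:8888/mlflow")
    args = parser.parse_args()

    # Point the MLflow client at the tracking server exposed by the cluster.
    mlflow.set_tracking_uri(args.tracking_uri)
    mlflow.set_experiment("mlstack-smoke-test")

    with mlflow.start_run():
        mlflow.log_param("lr", 0.01)
        mlflow.log_metric("loss", 0.42)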

    Visit original content creator repository
    https://github.com/clemens33/mlstack

  • spawn-x-effects

    spawn-x-effects

    Interceptor (middleware) for spawn-x.

    Effects is a small interceptor that enables cascading state updates, where one action initializes many other actions.

    install

    With npm:

    npm install spawn-x spawn-x-effects --save
    

    With yarn:

    yarn add spawn-x spawn-x-effects
    

    With bower:

    bower install spawn-x spawn-x-effects --save
    

    Then include the UMD bundles directly in the browser:

    <script src="path/to/spawn-x/lib/spawn-x.umd.min.js"></script>
    <script src="path/to/spawn-x-effects/lib/spawn-x-effects.umd.min.js"></script>

    Usage

    app/store/index.js

    import { createStore, addInterceptor } from 'spawn-x';
    import { effects } from 'spawn-x-effects';
    
    import { logger } from '../interceptors/logger';
    import { tracksEffect } from '../effects/tracks';
    import { renderEffect } from '../effects/render';
    
    const initialState = {
      tracks: [
        'Puddle Of Mudd - Control',
        'American Hi-Fi - Flavor Of The Weak',
        'SR-71 - What A Mess'
      ]
    }
    
    //inject effects interceptor 
    const store = createStore(
      initialState,
      addInterceptor(logger, effects)
    );
    
    //add effect into effects interceptor
    effects.run(tracksEffect);
    effects.run(renderEffect);
    
    export {
      store
    }

    An effect is just a function which accepts the store and an action and then updates the state.

    app/effects/tracks.js

    import {
      ADD_TRACK,
      UPDATE_STORE_INIT,
      UPDATE_STORE,
      UPDATE_STORE_COMPLETE,
      NEED_RENDER
    } from '../constants';
    
    
    const tracksEffect = (store, action) => {
      switch (action.type) {
        case ADD_TRACK: {
          store.update('', {
            type: UPDATE_STORE_INIT,
            data: null
          });
          store.update('tracks', {
            type: UPDATE_STORE,
            data: store.select('tracks') ? store.select('tracks').concat(action.data) : [].concat(action.data)
          });
          store.update('', {
            type: UPDATE_STORE_COMPLETE,
            data: null
          });
          store.update('', {
            type: NEED_RENDER,
            data: {
              render: store.select('tracks')
            }
          });
          break;
        }
      }
    }
    
    export {
      tracksEffect
    }

    app/effects/render.js

    import {
      NEED_RENDER,
      RENDER_INIT,
      RENDER_COMPLETE
    } from '../constants';
    
    import { render } from '../methods/render';
    
    
    const renderEffect = (store, action) => {
      switch (action.type) {
        case NEED_RENDER: {
          store.update('', {
            type: RENDER_INIT,
            data: null
          });
    
          render(action.data.render);
    
          store.update('', {
            type: RENDER_COMPLETE,
            data: null
          });
          break;
        }
      }
    }
    
    export {
      renderEffect
    }

    app/actions/tracks.js

    import { store } from '../store';
    import { ADD_TRACK } from '../constants';
    
    
    const addTrack = data => {
      store.update('', {
        type: ADD_TRACK,
        data: data
      });
    }
    
    export {
      addTrack
    }

    And other files…

    app/constants/index.js

    export const ADD_TRACK = 'ADD_TRACK';
    export const UPDATE_STORE_INIT = 'UPDATE_STORE_INIT';
    export const UPDATE_STORE = 'UPDATE_STORE';
    export const UPDATE_STORE_COMPLETE = 'UPDATE_STORE_COMPLETE';
    export const NEED_RENDER = 'NEED_RENDER';
    export const RENDER_INIT = 'RENDER_INIT';
    export const RENDER_COMPLETE = 'RENDER_COMPLETE';

    app/methods/render.js

    const render = tracks => {
      const list = document.querySelector('#trackList');
    
      list.innerHTML = '';
    
      if (tracks === null) tracks = [];
    
      tracks.forEach(item => {
        const li = document.createElement('li');
    
        li.textContent = item;
        list.appendChild(li);
      });
    }
    
    export {
      render
    }

    app/interceptors/logger.js

    function logger(store) {
      return next => action => {
        console.log('action: ', action.type + ': ', JSON.parse(JSON.stringify(action.data)));
        next(action);
      }
    }
    
    export {
      logger
    }

    app/index.js

    import '../index.html';
    import { store } from './store';
    import { render } from './methods/render';
    import { addTrack } from './actions/tracks';
    
    
    const btn = document.querySelector('#addTrack');
    const input = document.querySelector('#input');
    
    btn.addEventListener('click', () => {
      addTrack(input.value);
      input.value = '';
    });
    
    render(store.select('tracks'));

    index.html

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      <title>App</title>
    </head>
    <body>
      <input type="text" id="input">
      <button id="addTrack">Add track</button>
      <ul id="trackList"></ul>
    
      <script src="dist/app.bundle.js"></script>
    </body>
    </html>

    LICENSE

    MIT © Alex Plex

    Visit original content creator repository
    https://github.com/atellmer/spawn-x-effects

  • nocuous

    nocuous


    A static code analysis tool for JavaScript and TypeScript.

    Installing the CLI

    To install the CLI, you need to have Deno installed first; then, on the command line, run the following command:

    $ deno install --name nocuous --allow-read --allow-net -f jsr:@higher-order-testing/nocuous/cli

    You can also “pin” to a specific version by using nocuous@{version} instead, for example jsr:@higher-order-testing/nocuous@1.1.0/cli.

    The CLI comes with integrated help which can be accessed via the --help flag.

    Using the API

    If you want to incorporate the API into an application, you need to import it into your code. For example the following will analyze the Deno std assertion library and its dependencies resolving with a map of statistics:

    import { instantiate, stats } from "jsr:@higher-order-testing/nocuous";
    
    await instantiate();
    
    const results = await stats(new URL("https://jsr.io/@std/assert/1.0.6/mod.ts"));
    
    console.log(results);

    Architecture

    The tool uses swc as a Rust library to parse code and then run analysis over the parsed code. It is then compiled to Web Assembly and exposed as an all-in-one API. Code is loaded via the JavaScript runtime and a resolver can be provided to allow for custom resolution logic.

    Background

    The statistics collected around code toxicity are based directly on Erik Dörnenburg’s article How toxic is your code?.

    The default metrics are based on what is suggested in the article. When applying them to TypeScript/JavaScript, some adaptations are required:

    Metric Table Label Description Default Threshold
    File length L The number of lines in a file. 500
    Class data abstraction coupling CDAC The number of instances of other classes that are “new”ed in a given class. 10
    Anon Inner Length AIL Class expressions of arrow functions length in number of lines. 35
    Function Length FL The number of statements in a function declaration, function expression, or method declaration. 30
    Parameter Number P The number of parameters for a function or method. 6
    Cyclomatic Complexity CC The cyclomatic complexity for a function or method. 10
    Binary Expression Complexity BEC How complex a binary expression is (e.g. how many && and || operators it contains).
    Missing Switch Default MSD Any switch statements that are missing the default case. 1

    Copyright 2019 – 2024 Kitson P. Kelly. MIT License.

    Visit original content creator repository https://github.com/h-o-t/nocuous
  • pyLateMon

    Docker Based Latency Monitor

    Docker container(s) which track the latency of one or many hosts and report to InfluxDBv2.

    Description

    This Docker container tracks the latency of one or many targets and reports all data to a given InfluxDBv2 instance.

    It's based on Python 3 and makes use of the following Python libraries:

    • pythonping
    • influxdb_client
    • threading
    • sys
    • os
    • datetime
    • configparser
    • time

    You can use it in standalone or full stack mode.

    Standalone:

    • Just the latency-monitor container which sends data to an external InfluxDB2 Server

    Full Stack:

    • Traefik container as Proxy (full TLS support)
    • InfluxDB2 container, fully setup and ready to take data
    • Grafana container, fully setup and connected (but without dashboards)
    • latency-monitor container sending data to the InfluxDB2 container

    Requirements

    • Docker (CE)
    • Docker-Compose
    • InfluxDB Version >= 2
    • pythonping needs root privileges, so the same applies to the container
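
    The core measurement relies on pythonping, which is also why root privileges are required (raw ICMP sockets). A minimal sketch of a single probe, assuming the standard pythonping API, looks like this:

    from pythonping import ping

    # Single probe against the example target from the ENV table below; needs root for raw ICMP sockets.
    result = ping("8.8.8.8", count=1, timeout=0.5)
    print(result.rtt_avg_ms)  # average round-trip time in milliseconds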

    Configuration (GENERAL)

    Configuration can be passed via ENV OR configuration file.

    When using the ENV option you can only monitor ONE target; for more targets please use the configuration file.

    Also, some InfluxDB connection options are only configurable via the config file, but they are normally not needed.

    Behaviour

    By default the Python InfluxDB client caches all data points and sends them to the InfluxDB in batches every 30 seconds.
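
    As an illustration of that batching, this is roughly how the influxdb_client write API is configured for a 30 second flush interval. The measurement and field names below are assumptions, not necessarily what latency_monitor.py uses; the connection values come from the ENV examples further down.

    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import WriteOptions

    client = InfluxDBClient(url="http://10.0.0.1:8086", token="<INFLUX_TOKEN>", org="MyOrg")

    # flush_interval is given in milliseconds; 30_000 ms matches the 30 second batching described above.
    write_api = client.write_api(write_options=WriteOptions(batch_size=500, flush_interval=30_000))

    point = Point("latency").tag("target", "8.8.8.8").field("rtt_ms", 12.3)
    write_api.write(bucket="latency", record=point)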

    Currently the latency-monitor container is built on demand; a Docker Hub image is on the roadmap…

    You can find everything under ./Docker_Build/ and in the Python program itself, latency_monitor.py.


    ENV Variables

    Name Example Usage Option/Must Type Default
    INFLUX_URL http://10.0.0.1:8086 InfluxDB Host must URL
    INFLUX_TOKEN eWOcp-MCv2YPlER7wc…0zRNnrIoTqZAg== InfluxDB API Token must String
    INFLUX_BUCKET latency InfluxDB Bucket must String
    INFLUX_ORG MyOrg InfluxDB Organization must String
    TARGET_HOST 8.8.8.8 Monitored Host (IP/FQDN) must FQDN or IP
    TARGET_TIMEOUT 0.5 ping timeout in sec. optional Float >0 1
    TARGET_TIMER 3 ping frequency in sec. optional Int >1 5
    TARGET_LOCATION Google descriptive location optional String unknown


    Config File

    Instead of using the ENV variables you can use a config file.

    Keep in mind it's an OR decision, not an AND.

    See ./latency-monitor/config-template.ini

    ENV wins over file
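
    The precedence rule can be pictured with a small sketch; the helper name and the section name used here are illustrative and not taken from config-template.ini:

    import configparser
    import os

    def load_setting(name, section="INFLUX", default=None):
        # ENV wins over file: check the environment first, then fall back to config.ini.
        if name in os.environ:
            return os.environ[name]
        parser = configparser.ConfigParser()
        if parser.read("/app/config.ini"):
            return parser.get(section, name, fallback=default)
        return default

    influx_url = load_setting("INFLUX_URL")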

    Docker-Compose Style

    uncomment:

    # - ./latency-monitor/config.ini:/app/config.ini:ro # UNCOMMENT IF NEEDED
    

    Docker-CLI Style

    docker run -v ./latency-monitor/config.ini:/app/config.ini:ro latency-monitor
    


    Configuration (Standalone)

    The first thing to do is to create docker-compose.yml from docker-compose-standalone.yml:

    cp docker-compose-standalone.yml docker-compose.yml
    

    Variables

    Below the paragraph:

    ####################################################
    # LATENCY-MONITOR
    ####################################################
    

    in the .env file (env needs to be renamed to .env), configure the following variables:

    • YOUR_ORGANIZATION
    • YOUR_BUCKET_NAME
    • YOUR_ADMIN_TOKEN
    • YOUR_MONITORED_TARGET
    • YOUR_MONITORED_TARGET_TIMEOUT
    • YOUR_MONITORED_TARGET_TIMER
    • YOUR_MONITORED_TARGET_LOCATION

    Let's go

    docker-compose up -d latency-monitor
    

    should do the job



    Configuration (Full-Stack)

    Easy peasy automatic mode

    Have a look at ./setup-full_stack.sh

    Just create a valid .env File by:

    cp env .env
    

    and edit it to your needs.

    After everything within .env is in order, just do:

    ./setup-full_stack.sh
    

    Everything should be right in place now.

    Just the certificates are missing; look here.

    Now run it and maybe pick an example dashboard for Grafana from here.

    BACKUP FILES???

    The script will back up the following files if found:

    • ./docker-compose.yml
    • ./grafana/provisioning/datasources/grafana-datasource.yml


    WTF manual mode

    REALLY???

    You need to set everything up on your own:

    Variables

    You need to configure variables in the following files to make the compose work:

    • file
      • VARIABLE1
      • VARIABLE2
      • VARIABLE3

    • docker-compose.yml (generated from docker-compose-full_stack.yml)
      • PLACE_YOUR_FQDN_HERE (3 times)

    • .env (env needs to be renamed to .env)
      • YOUR_PATH_TO_CONTAINER_STATIC_DATA
      • YOUR_ADMIN_USER
      • YOUR_ADMIN_PASSWORD
      • YOUR_ORGANIZATION
      • YOUR_BUCKET_NAME
      • YOUR_ADMIN_TOKEN
      • YOUR_MONITORED_TARGET
      • YOUR_MONITORED_TARGET_TIMEOUT
      • YOUR_MONITORED_TARGET_TIMER
      • YOUR_MONITORED_TARGET_LOCATION

    • grafana/provisioning/datasources/grafana-datasource.yml (generated from grafana/grafana-datasource-template.yml)
      • YOUR_ADMIN_TOKEN
      • YOUR_ORGANIZATION
      • YOUR_BUCKET_NAME

    File Permissions

    Because we are configuring Grafana for permanent data storage and Grafana runs with UID + GID 472:472, it's necessary to change the permissions of the permanent storage directory we have configured.

    The directory is built from the following config part of Grafana within the docker-compose.yml:

    ${MyPath}/grafana/var_lib
    

    MyPath was configured earlier in the .env file.

    so let's assume the following:

    MyPath = /opt/docker/containers/

    then you have to do the following

    chown -R 472:472 /opt/docker/containers/grafana/var_lib
    

    Everything should be right in place now.

    Just the certificates are missing; look here.

    Now just start over and maybe pick an example dashboard for grafana from here



    Certificate

    Traefik acts as a proxy and ensures the use of TLS, so it needs your certificate and key file.

    within the docker-compose.yml you will find:

          - ./traefik/mycert.crt:/certs/cert.crt:ro
          - ./traefik/mycert.key:/certs/privkey.key:ro
    

    so please place your certificate file as ./traefik/mycert.crt and the key file as ./traefik/mycert.key.

    That's it.

    Grafana Dashboard Examples

    Within the local path ./examples/grafana/ you can find example .json files which can be imported to grafana as dashboards to give you a first point to start with.



    Authors

    Contributors names and contact info

    Version History

    • v0.4a

      • fixed tag recognition for the build process
    • v0.4

      • moved from self-built image to Docker Hub
    • v0.3a

      • ping timeout added
      • cleanup
    • v0.3

      • setup-script fixed and backup added
      • fixed latency value problem (was sometimes string instead of float)
      • cleanup
    • v0.2b

      • cleanup
    • v0.2a

      • fixed some missing variables
      • fixed a missing integer declaration in latency-monitor
      • added automatic config creation for full-stack
      • cleanups
    • v0.1

      • Initial Release

    License

    free to use

    Visit original content creator repository
    https://github.com/planet-espresso/pyLateMon

  • GNNBoundary

    GNNBoundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries

    Xiaoqi Wang¹, Han-Wei Shen¹

    ¹The Ohio State University

    ICLR 2024

    🚀 Overview

    GNNBoundary

    📖 Introduction

    While Graph Neural Networks (GNNs) have achieved remarkable performance on various machine learning tasks on graph data, they also raised questions regarding their transparency and interpretability. Recently, there have been extensive research efforts to explain the decision-making process of GNNs. These efforts often focus on explaining why a certain prediction is made for a particular instance, or what discriminative features the GNNs try to detect for each class. However, to the best of our knowledge, there is no existing study on understanding the decision boundaries of GNNs, even though the decision-making process of GNNs is directly determined by the decision boundaries. To bridge this research gap, we propose a model-level explainability method called GNNBoundary, which attempts to gain deeper insights into the decision boundaries of graph classifiers. Specifically, we first develop an algorithm to identify the pairs of classes whose decision regions are adjacent. For an adjacent class pair, the near-boundary graphs between them are effectively generated by optimizing a novel objective function specifically designed for boundary graph generation. Thus, by analyzing the near-boundary graphs, the important characteristics of decision boundaries can be uncovered. To evaluate the efficacy of GNNBoundary, we conduct experiments on both synthetic and public real-world datasets. The results demonstrate that, via the analysis of faithful near-boundary graphs generated by GNNBoundary, we can thoroughly assess the robustness and generalizability of the explained GNNs.

    Paper: https://openreview.net/pdf?id=WIzzXCVYiH

    🔗 Prior Work – GNNInterpreter

    GNNBoundary is inspired by our prior work, GNNInterpreter, which aims to explain the high-level decision-making process of GNNs. Please check this repository for more details.

    Paper: https://openreview.net/forum?id=rqq6Dh8t4d

    🔥 How to use

    Notebooks

    • gnnboundary_collab.ipynb contains the demo for the COLLAB dataset experiment in the paper.
    • gnnboundary_enzymes.ipynb contains the demo for the ENZYME dataset experiment in the paper.
    • gnnboundary_motif.ipynb contains the demo for the Motif dataset experiment in the paper.
    • model_training.ipynb contains the demo for GNN classifier training.

    Model Checkpoints

    • You can find the GNN classifier checkpoints in the ckpts folder.
    • See model_training.ipynb for how to load the model checkpoints.

    Datasets

    • Here’s the link for downloading the processed datasets.
    • After downloading the datasets zip, please unzip it in the root folder.

    Environment

    The code in this repo has been tested with Python 3.10 + PyTorch 2.1 + PyG 2.5.

    To reproduce the exact python environment, please run:

    conda create -n gnnboundary poetry jupyter
    conda activate gnnboundary
    poetry install
    ipython kernel install --user --name=gnnboundary --display-name="GNNBoundary"

    Note: In case poetry fails to install the dependencies, you can manually install them using pip:

    pip install -r requirements.txt

    🖼️ Demo

    demo

    🖊️ Citation

    If you use our code or find our work useful in your research, please consider citing:

    @inproceedings{wang2024gnnboundary,
    title={{GNNB}oundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries},
    author={Xiaoqi Wang and Han Wei Shen},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=WIzzXCVYiH}
    }
    

    🙏 Acknowledgement

    The work was supported in part by the US Department of Energy SciDAC program DE-SC0021360, National Science Foundation Division of Information and Intelligent Systems IIS-1955764, and National Science Foundation Office of Advanced Cyberinfrastructure OAC-2112606.

    Visit original content creator repository https://github.com/yolandalalala/GNNBoundary
  • AnyKernel3


    AnyKernel3 – Flashable Zip Template for Kernel Releases with Ramdisk Modifications

    by osm0sis @ xda-developers

    “AnyKernel is a template for an update.zip that can apply any kernel to any ROM, regardless of ramdisk.” – Koush

    AnyKernel2 pushed the format further by allowing kernel developers to modify the underlying ramdisk for kernel feature support easily using a number of included command methods along with properties and variables to customize the installation experience to their kernel. AnyKernel3 adds the power of topjohnwu’s magiskboot for wider format support by default, and to automatically detect and retain Magisk root by patching the new Image.*-dtb as Magisk would.

    A script based on Galaxy Nexus (tuna) is included for reference. Everything to edit is self-contained in anykernel.sh.

    // Properties / Variables

    kernel.string=KernelName by YourName @ xda-developers
    do.devicecheck=1
    do.modules=1
    do.systemless=1
    do.cleanup=1
    do.cleanuponabort=0
    device.name1=maguro
    device.name2=toro
    device.name3=toroplus
    device.name4=tuna
    supported.versions=6.0 - 7.1.2
    supported.patchlevels=2019-07 -
    
    block=/dev/block/platform/omap/omap_hsmmc.0/by-name/boot;
    is_slot_device=0;
    ramdisk_compression=auto;
    

    do.devicecheck=1 specified requires at least device.name1 to be present. This should match ro.product.device, ro.build.product, ro.product.vendor.device or ro.vendor.product.device from the build.prop files for your device. There is support for as many device.name# properties as needed. You may remove any empty ones that aren’t being used.

    do.modules=1 will push the .ko contents of the modules directory to the same location relative to root (/) and apply correct permissions. On A/B devices this can only be done to the active slot.

    do.systemless=1 (with do.modules=1) will instead push the full contents of the modules directory to create a simple “ak3-helper” Magisk module, allowing developers to effectively replace system files, including .ko files. If the current kernel is changed then the kernel helper module automatically removes itself to prevent conflicts.

    do.cleanup=0 will keep the zip from removing its working directory in /tmp/anykernel (by default) – this can be useful if trying to debug in adb shell whether the patches worked correctly.

    do.cleanuponabort=0 will keep the zip from removing its working directory in /tmp/anykernel (by default) in case of installation abort.

    supported.versions= will match against ro.build.version.release from the current ROM’s build.prop. It can be set to a list or range. As a list of one or more entries, e.g. 7.1.2 or 8.1.0, 9 it will look for exact matches, as a range, e.g. 7.1.2 - 9 it will check to make sure the current version falls within those limits. Whitespace optional, and supplied version values should be in the same number format they are in the build.prop value for that Android version.

    supported.patchlevels= will match against ro.build.version.security_patch from the current ROM’s build.prop. It can be set as a closed or open-ended range of dates in the format YYYY-MM, whitespace optional, e.g. 2019-04 - 2019-06, 2019-04 - or - 2019-06 where the last two examples show setting a minimum and maximum, respectively.

    block=auto instead of a direct block filepath enables detection of the device boot partition for use with broad, device non-specific zips. Also accepts specifically boot, recovery or vendor_boot.

    is_slot_device=1 enables detection of the suffix for the active boot partition on slot-based devices and will add this to the end of the supplied block= path. Also accepts auto for use with broad, device non-specific zips.

    ramdisk_compression=auto allows automatically repacking the ramdisk with the format detected during unpack. Changing auto to gz, lzo, lzma, xz, bz2, lz4, or lz4-l (for lz4 legacy) instead forces the repack as that format, and using cpio or none will (attempt to) force the repack as uncompressed.

    patch_vbmeta_flag=auto allows automatically using the default AVBv2 vbmeta flag on repack, following the Magisk configuration (Canary 23016+). Set to 0 it forces keeping whatever is in the original AVBv2 flags, and set to 1 it forces patching the flag (only necessary on a few devices).

    customdd="<arguments>" may be added to allow specifying additional dd parameters for devices that need to hack their kernel directly into a large partition like mmcblk0, or force use of dd for flashing.

    slot_select=active|inactive may be added to allow specifying the target slot. If omitted the default remains active.

    no_block_display=1 may be added to disable output of the detected final used partition+slot path for zips which choose to include their own custom output instead.

    // Command Methods

    ui_print "<text>" [...]
    abort ["<text>" [...]]
    contains <string> <substring>
    file_getprop <file> <property>
    
    set_perm <owner> <group> <mode> <file> [<file2> ...]
    set_perm_recursive <owner> <group> <dir_mode> <file_mode> <dir> [<dir2> ...]
    
    dump_boot
    split_boot
    unpack_ramdisk
    
    backup_file <file>
    restore_file <file>
    replace_string <file> <if search string> <original string> <replacement string> <scope>
    replace_section <file> <begin search string> <end search string> <replacement string>
    remove_section <file> <begin search string> <end search string>
    insert_line <file> <if search string> <before|after> <line match string> <inserted line>
    replace_line <file> <line replace string> <replacement line> <scope>
    remove_line <file> <line match string> <scope>
    prepend_file <file> <if search string> <patch file>
    insert_file <file> <if search string> <before|after> <line match string> <patch file>
    append_file <file> <if search string> <patch file>
    replace_file <file> <permissions> <patch file>
    patch_fstab <fstab file> <mount match name> <fs match type> block|mount|fstype|options|flags <original string> <replacement string>
    patch_cmdline <cmdline entry name> <replacement string>
    patch_prop <prop file> <prop name> <new prop value>
    patch_ueventd <ueventd file> <device node> <permissions> <chown> <chgrp>
    
    repack_ramdisk
    flash_boot
    flash_generic <partition name>
    write_boot
    
    reset_ak [keep]
    setup_ak
    

    “if search string” is the string it looks for to decide whether it needs to add the tweak or not, so generally something to indicate the tweak already exists. “cmdline entry name” behaves somewhat like this as a match check for the name of the cmdline entry to be changed/added by the patch_cmdline function, followed by the full entry to replace it. “prop name” also serves as a match check in patch_prop for a property in the given prop file, but is only the prop name as the prop value is specified separately.

    Similarly, “line match string” and “line replace string” are the search strings that locate where the modification needs to be made for those commands, “begin search string” and “end search string” are both required to select the first and last lines of the script block to be replaced for replace_section, and “mount match name” and “fs match type” are both required to narrow the patch_fstab command down to the correct entry.

    “scope” may be specified as “global” to force all instances of the string/line targeted by replace_string, replace_line or remove_line to be replaced/removed accordingly. Omitted or set to anything else and it will perform the default first-match action.

    “before|after” requires you simply specify “before” or “after” for the placement of the inserted line, in relation to “line match string”.

    “block|mount|fstype|options|flags” requires you specify which part (listed in order) of the fstab entry you want to check and alter.

    dump_boot and write_boot are the default method of unpacking/repacking, but for more granular control, or omitting ramdisk changes entirely (“OG AK” mode), these can be separated into split_boot; unpack_ramdisk and repack_ramdisk; flash_boot respectively. flash_generic can be used to flash an image to the corresponding partition. It is automatically included for dtbo and vendor_dlkm in write_boot but can be called separately if using “OG AK” mode or creating a simple partition flashing only zip.

    Multi-partition zips can be created by removing the ramdisk and patch folders from the zip and including instead “-files” folders named for the partition (without slot suffix), e.g. boot-files + recovery-files, or kernel-files + ramdisk-files (on some Treble devices). These then contain Image.gz, and ramdisk, patch, etc. subfolders for each partition. To setup for the next partition, simply set block= (without slot suffix) and ramdisk_compression= for the new target partition and use the reset_ak command.

    Similarly, multi-slot zips can be created with the normal zip layout for the active (current) slot, then resetting for the inactive slot by setting block= (without slot suffix) again, slot_select=inactive and ramdisk_compression= for the target slot and using the reset_ak keep command, which will retain the patch and any added ramdisk files for the next slot.

    backup_file may be used for testing to ensure ramdisk changes are made correctly, transparency for the end-user, or in a ramdisk-only “mod” zip. In the latter case restore_file could also be used to create a “restore” zip to undo the changes, but should be used with caution since the underlying patched files could be changed with ROM/kernel updates.

    You may also use ui_print “<text>” to write messages back to the recovery during the modification process, abort “<text>” to abort with optional message, and file_getprop “<file>” “<property>” and contains “<string>” “<substring>” to simplify string testing logic you might want in your script.

    // Binary Inclusion

    The AK3 repo includes current ARM builds of magiskboot, magiskpolicy and busybox by default to keep the basic package small. Builds for other architectures and optional binaries (see below) are available from the latest Magisk zip, or my latest AIK-mobile and FlashIt packages, respectively, here:

    https://forum.xda-developers.com/t/tool-android-image-kitchen-unpack-repack-kernel-ramdisk-win-android-linux-mac.2073775/ (Android Image Kitchen thread)
    https://forum.xda-developers.com/t/tools-zips-scripts-osm0sis-odds-and-ends-multiple-devices-platforms.2239421/ (Odds and Ends thread)

    Optional supported binaries which may be placed in /tools to enable built-in expanded functionality are as follows:

    • mkbootfs – for broken recoveries, or, booted flash support for a script/app via bind mount to /tmp (deprecated/use with caution)
    • flash_erase, nanddump, nandwrite – MTD block device support for devices where the dd command is not sufficient
    • dumpimage, mkimage – DENX U-Boot uImage format support
    • mboot – Intel OSIP Android image format support
    • unpackelf, mkbootimg – Sony ELF kernel.elf format support, repacking as AOSP standard boot.img for unlocked bootloaders
    • elftool (with unpackelf) – Sony ELF kernel.elf format support, repacking as ELF for older Sony devices
    • mkmtkhdr (with unpackelf) – MTK device boot image section headers support for Sony devices
    • futility + chromeos test keys directory – Google ChromeOS signature support
    • boot_signer-dexed.jar + avb keys directory – Google Android Verified Boot 1.0 (AVBv1) signature support
    • rkcrc – Rockchip KRNL ramdisk image support

    Optionally moving ARM builds to tools/arm and putting x86 builds in tools/x86 will enable architecture detection for use with broad, device non-specific zips.

    // Instructions

    1. Place final kernel build product, e.g. Image.gz-dtb or zImage to name a couple, in the zip root (any separate dt, dtb or recovery_dtbo, dtbo and/or vendor_dlkm should also go here for devices that require custom ones, each will fallback to the original if not included)

    2. Place any required ramdisk files in /ramdisk (/vendor_ramdisk for simple multi-partition vendor_boot support) and module files in /modules (with the full path like /modules/system/lib/modules)

    3. Place any required patch files (generally partial files which go with AK3 file editing commands) in /patch (/vendor_patch for simple multi-partition vendor_boot support)

    4. Modify the anykernel.sh to add your kernel’s name, boot partition location, permissions for any added ramdisk files, and use methods for any required ramdisk modifications (optionally, also place banner and/or version files in the root to have these displayed during flash)

    5. zip -r9 UPDATE-AnyKernel3.zip * -x .git README.md *placeholder

    The LICENSE file must remain in the final zip to comply with licenses for binary redistribution and the license of the AK3 scripts.

    If supporting a recovery that forces zip signature verification (like Cyanogen Recovery) then you will need to also sign your zip using the method I describe here:

    https://forum.xda-developers.com/t/dev-template-complete-shell-script-flashable-zip-replacement-signing-script.2934449/

    Not required, but any tweaks you can’t hardcode into the source (best practice) should be added with an additional init.tweaks.rc or bootscript.sh to minimize the necessary ramdisk changes. On newer devices Magisk allows these within /overlay.d – see examples.

    It is also extremely important to note that for the broadest AK3 compatibility it is always better to modify a ramdisk file rather than replace it.

    If running into trouble when flashing an AK3 zip, the suffix -debugging may be added to the zip’s filename to enable creation of a debug .tgz of /tmp for later examination while booted or on desktop.

    // Staying Up-To-Date

    Now that you’ve got a ready zip for your device, you might be wondering how to keep it up-to-date with the latest AnyKernel commits. AnyKernel2 and AnyKernel3 have been painstakingly developed to allow you to just drop in the latest update-binary and tools directory and have everything “just work” for beginners not overly git or script savvy, but the best practice way is as follows:

    1. Fork my AnyKernel3 repo on GitHub

    2. git clone https://github.com/<yourname>/AnyKernel3

    3. git remote add upstream https://github.com/osm0sis/AnyKernel3

    4. git checkout -b <devicename>

    5. Set it up like your zip (i.e. remove any folders you don’t use like ramdisk or patch, delete README.md, and add your anykernel.sh and optionally your Image.*-dtb if you want it up there) then commit all those changes

    6. git push --set-upstream origin <devicename>

    7. git checkout master then repeat steps 4-6 for any other devices you support

    Then you should be able to git pull upstream master from your master branch and either merge or cherry-pick the new AK3 commits into your device branches as needed.

    For further support and usage examples please see the AnyKernel3 XDA thread: https://forum.xda-developers.com/t/dev-template-anykernel3-easily-mod-rom-ramdisk-pack-image-gz-flashable-zip.2670512/

    Have fun!

    Visit original content creator repository
    https://github.com/karthik558/AnyKernel3

  • mapcomp

    MapComp

    Genetic Map Comparison

    Introduction

    MapComp facilitates visual comparisons among linkage maps of closely-related species in order to assess their quality and to simplify the exploration of their chromosomal differences. The novelty of the approach lies in the use of a reference genome in order to maximize the number of comparable marker pairs among pairs of maps, even when completely different library preparation protocols have been used to generate the markers. As such, MapComp requires a reference genome, at least a contig-level genome assembly, for a species that is phylogenetically close to the target species.

    Using MapComp

    The main steps in using MapComp are:

    • Get a reference genome and put here: 02_data/genome/genome.fasta
    • Index the reference genome (bwa index 02_data/genome/genome.fasta)
    • Get marker data from two or more taxa
    • Prepare .csv marker file (see 02_data/tutorial_markers.csv for exact format)
    • Prepare markers fasta file automatically from .csv file
    • Run mapcomp, which will:
      • Map marker sequences on reference genome scaffolds
      • Filter out non-unique and bad quality alignments
      • Keep only the best marker pairs
      • Create figures

    Dependencies

    In order to use MapComp, you will need the following:

    • Linux or MacOS
    • Python 2.7
    • numpy (Python library)
    • bwa
    • samtools (1.x release)
    • The R statistical language

    If you are using a Debian derived Linux distribution, for example Ubuntu or Linux Mint, you can install all the required tools with the following command:

    sudo apt-get install bwa samtools r-base-core
    

    Tutorial

    A tutorial data set of markers for two species and a reference genome are included in MapComp. Both the genome and marker data used for the tutorial were created in silico. As a result, the figures will look really perfect. However, the goal of the tutorial is to run a full MapComp analysis once to learn how to use it with your real data. Additionally, the tutorial .csv data file serves as an example of the exact format required for the marker .csv file, which contains the marker information for the analyzed species.

    Once you have produced the figures from the tutorial data, then using MapComp on your data will be as easy as preparing the .csv file, automatically creating the markers fasta file, getting and indexing the reference genome and running ./mapcomp.

    Tutorial run

    # Rename and index genome
    cp 02_data/genome/tutorial_genome.fasta 02_data/genome/genome.fasta
    bwa index 02_data/genome/genome.fasta
    
    # Prepare fasta file
    ./01_scripts/00_prepare_input_fasta_file_from_csv.sh 02_data/tutorial_markers.csv
    
    # Run mapcomp
    ./mapcomp
    

    You can now look at the figures in the 04_figures folder and at the linkage group correspondence among the species in the 05_results folder.

    Data preparation

    In order to compare linkage maps, you will need to collect the following information about each marker:

    • Species name (eg: hsapiens)
    • Linkage Group number (eg: 1, 2, 3…)
    • Position in centimorgans, or cM (eg: 0, 5.32, 22.8)
    • Marker Identifier (eg: marker0001)
    • Marker Nucleotide Sequence (60 base pairs or more)

    Once you have all this information about the markers, you will need to create a .csv file containing this information. The .csv file features one extra column containing zeroes and is in the following format:

    SpeciesName,LG,Position,Zeroes,markerName,markerSequence
    

    Here is what the .csv file may look like:

    hsapiens,1,0.58,0,marker0001,CGGCACCTCCACTGCGGCACGAAGAGTTAGGCCCCGTGCTTTGCGG
    hsapiens,1,5.74,0,marker0002,CGGCACCTCCACTGCGGCACGAAGAGTTAGGCCCCGTGCTTTGCGG
    ...
    hsapiens,1,122.39,0,marker0227,CGGCACCTCCACTGCGGCACGAAGAGTTAGGCCCCGTGCTTTGCGG
    

    Use the 02_data/tutorial_markers.csv file as a template for your own .csv file.

    Note that:

    • There is no header line in the .csv file
    • There are 6 columns of information
    • The different columns are separated by a comma (,)
    • The fourth column is filled with zeroes (0)
    • You need more than one map in the .csv file
    • You should avoid special characters, including underscores (_) in the marker names
    • You must use the period (.) as the decimal separator (no comma (,))
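
    If your marker data lives in another format, a small script can write the .csv for you. The sketch below is illustrative (the marker record is made up); Python writes positions with a period as the decimal separator and csv.writer adds no header line, which matches the requirements above:

    import csv

    # Made-up marker record: (species, linkage group, position in cM, marker id, sequence of 60+ bp).
    markers = [
        ("hsapiens", 1, 0.58, "marker0001", "CGGCACCTCCACTGCGGCACGAAGAGTTAGGCCCCGTGCTTTGCGGCACCTCCACTGCGG"),
    ]

    with open("02_data/my_markers.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        for species, lg, pos, marker_id, seq in markers:
            # Six columns, no header line, fourth column always zero.
            writer.writerow([species, lg, pos, 0, marker_id, seq])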

    Automatically creating the markers fasta file

    The .csv file will be used to create a fasta file using the following script:

    ./01_scripts/00_prepare_input_fasta_file_from_csv.sh <your_file.csv>
    

    This will produce a file named 02_data/marker.fasta.

    Preparing the reference genome

    Once you have a reference genome in fasta format, copy it here: 02_data/genome/genome.fasta and index it with bwa:

    bwa index 02_data/genome/genome.fasta
    

    Running MapComp

    Once your data has been prepared and your reference genome is indexed, running mapcomp is as easy as launching the following command:

    ./mapcomp
    

    Exploring Results

    After MapComp finishes, visual plots comparing the different linkage maps will be found in 04_figures and a summary of the results in 05_results. For more detailed results, one can inspect the 03_mapped/wanted_loci.info file. This file contains the details of the marker pairs found for each species pair, and can be useful to obtain exact mapping locations of markers on the reference genome.

    An example output image generated from the tutorial markers and genome can be found in the original repository.

    Citing

    If you use MapComp in your research, please cite:

    Sutherland BJG, Gosselin T, Normandeau E, Lamothe M, Isabel N, Bernatchez L. Salmonid Chromosome Evolution as Revealed by a Novel Method for Comparing RADseq Linkage Maps. Genome Biol Evol (2016) 8 (12): 3600-3617. DOI: https://doi.org/10.1093/gbe/evw262

    (preprint version: bioRxiv. 2016: 1–44. doi:10.1101/039164)

    Troubleshooting

    A Google Group for MapComp is available at: https://groups.google.com/forum/#!forum/mapcomp

    License

    MapComp is licensed under the GNU General Public Licence version 3 (GPL3). See the LICENCE file for more details.

    Visit original content creator repository https://github.com/enormandeau/mapcomp
  • aboutmeinfo-telegram-bot

    aboutmeinfo-telegram-bot logo

    🤖 aboutmeinfo-telegram-bot

    v0.6.9-nightly.163 License: MIT Language: TypeScript Framework: Grammy ECMAScript: 2019 Discord Server

    About Me Info Bot: Share your social profiles and links on Telegram

    🎁 Support: Donate

    This project is free, open source and I try to provide excellent free support. Why donate? I work on this project for several hours in my spare time and try to keep it up to date and working. THANK YOU!

    Donate Paypal Donate Ko-Fi Donate GitHub Sponsors Donate Patreon Donate Bitcoin Donate Ethereum

    💡 Features

    • [✔️] Easy to use
    • [✔️] MIT License
    • [✔️] Powered by Grammy Telegram API Framework
    • [✔️] Share your social media and links on Telegram
    • [✔️] Share your Instagram profile on Telegram
    • [✔️] Share your GitHub profile on Telegram
    • [✔️] Share your GitLab profile on Telegram
    • [✔️] Share your Facebook profile on Telegram
    • [✔️] Share your Twitter profile on Telegram
    • [✔️] Share your TikTok profile on Telegram
    • [✔️] Share your Twitch profile on Telegram
    • [✔️] Share your Mastodon profile on Telegram
    • [✔️] Share your PSN profile on Telegram
    • [✔️] Share your Steam profile on Telegram
    • [✔️] Share your LinkedIn profile on Telegram
    • [✔️] Share your YouTube profile on Telegram
    • [✔️] Share your Spotify playlist on Telegram
    • [✔️] Share your Discord profile on Telegram
    • [✔️] Share your OnlyFans profile on Telegram
    • [✔️] Share your Website on Telegram

    👔 Screenshot

    aboutmeinfo-telegram-bot

    🚀 Installation

    1. Add @AboutMeInfoBot to your Telegram group
    2. Run /start or /start@AboutMeInfoBot
    3. Ask social buttons of user with /about @NICKNAME

    🎮 How to set your links

    1. Send private message to @AboutMeInfoBot
    2. Run /start or /start@AboutMeInfoBot
    3. Follow instructions

    🔨 Developer Mode

    🏁 Run Project

    1. Clone this repository or download nightly, beta or stable.
    2. Write to @botfather on telegram and create new bot (save token and set bot username)
    3. Run with correct values: npm run init:token --username name_bot --token 1234:asdfghjkl
    4. Run npm install
    5. Run npm run dev
    6. Write /start on telegram bot.

    🚀 Deploy

    Deploy bot to your server and:

    1. Run with correct values: npm run init:token --token asdfghjkl
    2. Run init npm install
    3. Generate release npm run release
    4. Start bot npm run start-pm2

    📚 Documentation

    Run npm run docs

    👑 Backers and Sponsors

    Thanks to all our backers! 🙏 Donate $3 or more on PayPal, Ko-Fi, GitHub or Patreon and send me an email with your avatar and URL.

    👨‍💻 Contributing

    I ❤️ contributions! I will happily accept your pull request! (IMPORTANT: only to the nightly branch!) Translations, grammatical corrections (GrammarNazi, you are welcome! Yes, my English is bad, sorry), etc… Do not be afraid: if the code is not perfect, we will work together 👯. Remember to insert your name in the .all-contributorsrc and package.json files.

    Thanks goes to these wonderful people (emoji key):

    Patryk Rzucidło
    Patryk Rzucidło

    💻 🌍 📖 🐛
    Alì Shadman
    Alì Shadman
    💻 🌍 📖 🐛
    Adrian Castro
    Adrian Castro

    🌍
    Airscript
    Airscript

    💻 🌍 🐛

    💰 In the future, if the donations allow it, I would like to share some of the success with those who helped me the most. For me, open source means sharing code, sharing development knowledge and sharing donations!

    🦄 Other Projects

    💫 License

    • Code and Contributions have MIT License
    • Images and logos have CC BY-NC 4.0 License
    • Documentations and Translations have CC BY 4.0 License
    Visit original content creator repository https://github.com/ptkdev/aboutmeinfo-telegram-bot
  • sum-cli

    sum-cli

    CLI tool to extract and summarize text from a given URL. Quickly get the key points of any webpage without reading the full content.

    Requirements

    • Ollama
    • Python 3.10 or higher

    Installation

    pip install git+https://github.com/dmitriiweb/sum-cli.git

    Usage

    sum_cli --help
    sum_cli https://example.com
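
    Conceptually, the tool fetches the page, extracts its text and asks a local Ollama model for a summary. The sketch below is not sum-cli's actual implementation; the model name, prompt and crude HTML stripping are assumptions for illustration only.

    import re

    import ollama
    import requests

    url = "https://example.com"
    html = requests.get(url, timeout=30).text

    # Crude tag stripping, good enough for a sketch; a real extractor would be more careful.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)

    response = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": "Summarize this page in 10 sentences:\n" + text[:8000]}],
    )
    print(response["message"]["content"])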

    Example

    $ sum_cli https://python.langchain.com/docs/tutorials/llm_chain/
    Here is a concise summary of the article within 10 sentences:
    
    LangChain is a library that enables building applications using language models. This tutorial demonstrates how to build a simple LLM application with LangChain, which translates text from English into another language. The application consists of a single LLM call and prompt templates. Prompt templates take raw user input and return data ready to pass into a language model. A chat template is created with two variables: language and text. The template is used to format the input for the language model. The application invokes the chat model on the formatted prompt, generating a response in the target language. LangSmith provides logging and tracing capabilities, allowing developers to inspect the application's flow. This tutorial covers the basics of using language models, creating prompt templates, and getting observability with LangSmith. For further learning, detailed Conceptual Guides and other resources are available.
    

    Visit original content creator repository
    https://github.com/dmitriiweb/sum-cli

  • Uber-Case-Study

    Uber Case Study

    Supply Demand Gap Analysis

    Business Understanding

    You may have some experience of travelling to and from the airport. Have you ever used Uber or any other cab service for this travel? Did you at any time face the problem of cancellation by the driver or non-availability of cars?

    Well, if these are the problems faced by customers, these very issues also impact the business of Uber. If drivers cancel the request of riders or if cars are unavailable, Uber loses out on its revenue. Let’s hear more about such problems that Uber faces during its operations.

    As an analyst, you decide to address the problem Uber is facing – driver cancellation and non-availability of cars leading to loss of potential revenue.

    Business Objective

    The aim of analysis is to identify the root cause of the problem (i.e. cancellation and non-availability of cars) and recommend ways to improve the situation. As a result of your analysis, you should be able to present to the client the root cause(s) and possible hypotheses of the problem(s) and recommend ways to improve them.

    Data Understanding

    There are six attributes associated with each request made by a customer:

    1. Request id: A unique identifier of the request
    2. Request timestamp: The date and time at which the customer made the trip request
    3. Drop timestamp: The drop-off date and time, in case the trip was completed
    4. Pickup point: The point from which the request was made
    5. Driver id: The unique identification number of the driver
    6. Status: The final status of the trip, that can be either completed, cancelled by the driver or no cars available

    Data

    https://cdn.upgrad.com/UpGrad/temp/76b3b6a4-d87d-4e82-b1c3-3f6e10b9c076/Uber%20Request%20Data.csv
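
    As a starting point for the supply-demand gap analysis, a sketch like the following can be used once the CSV is downloaded. The column names and the completed-trip label are assumed to match the attributes listed above; adjust them to the actual file:

    import pandas as pd

    df = pd.read_csv("Uber Request Data.csv")

    # Timestamps may mix formats in the raw file; a lenient parse avoids hard failures.
    df["Request timestamp"] = pd.to_datetime(df["Request timestamp"], dayfirst=True, errors="coerce")
    df["Request hour"] = df["Request timestamp"].dt.hour

    # Supply-demand gap: requests that were cancelled or had no cars available, by pickup point and hour.
    completed_label = "Trip Completed"  # assumed label; check df["Status"].unique() first
    gap = (
        df[df["Status"] != completed_label]
        .groupby(["Pickup point", "Request hour"])
        .size()
        .rename("unserved_requests")
    )
    print(gap.sort_values(ascending=False).head(10))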

    Visit original content creator repository
    https://github.com/Lakshya-Ag/Uber-Case-Study