Blog

  • hemo

    Hemo

    CircleCI GitHub Actions

Hemo is a portmanteau (i.e. [H]anami + D[emo] = Hemo) designed to provide a fully working demo Hanami application, as built by the Hanamismith gem.

⚠️ This application is meant for demonstration purposes only, which means all commits are heavily rebased as new functionality is implemented. You are welcome, and encouraged, to clone this project, but I wouldn’t recommend forking it because the SHAs will constantly change, since each commit is meant to tell a story so people can learn how this application was architected. If you do clone (or download) a copy of this application, please note you’ll have to re-clone/download to pick up any new changes pushed to this repository.

    Features

    • Uses Hanamismith for building the initial project skeleton and application architecture.

• Uses modern Hanami (backend) and htmx (frontend) technology to rapidly develop full-featured web applications.

    • Uses modern CSS for stylesheets.

    • Provides a simple task management system for demonstration purposes where you can view, create, edit, update, and destroy tasks.

    Screencasts

    See Hanamismith for details.

    Requirements

    1. Ruby.

    2. PostgreSQL.

    3. Overmind (optional but recommended).

    Setup

    To set up the project, run:

    git clone https://github.com/bkuhlmann/hemo
    cd hemo
    bin/setup

    Usage

    For access to the console, run:

    bin/console

    To view all Rake tasks, run:

    rake -T

To view all Hanami CLI options, or the options of a CLI subcommand, run:

    bin/hanami -h
    bin/hanami db -h

    To develop — red, green, refactor — with Guard, run:

    bin/guard

    To launch the server, use any of the following:

    # With Overmind (recommended)
    overmind start --procfile Procfile.dev
    
    # Without Overmind
    bin/hanami server

Once the server is running, you can view the app at the local URL the server prints on startup. You can also check the status (health) of the app by hitting the /up endpoint.
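For example, a quick health check with curl might look like this (the port below is an assumption; use whatever address bin/hanami server or Overmind reports on startup):

# Expect an HTTP 200 response when the app is healthy.
curl -i http://localhost:2300/up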

    Tests

    To test, run:

    bin/rake

    Credits

Original repository: https://github.com/bkuhlmann/hemo
  • ast

    ast

    Implementation of Adversarial Sparse Transformer for Time Series Forecasting

    https://proceedings.neurips.cc/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-Paper.pdf

    Notes:

• Performance is about 0.10 on the q50 (0.5 quantile loss) metric on the electricity (1d) dataset.
• I placed layer normalization after each layer’s computation, with dropout applied just before the layer. Otherwise it does not work for me; it seems the layer should see the dropout zeros without them being smoothed away by layer normalization.
• I get similar performance with 2 or 3 layers, 4 heads, and a width of 128. Changing the width and the number of heads could likely improve performance; training takes on the order of 1000 epochs for me.
• I noticed that training with quantile loss results in high error for some items. I think an RMSE loss would pay more attention to outliers (see the loss sketch after this list).
• With adversarial training I get 0.10 on q50, compared to 0.11 without it. It is only a small difference.
• Prediction is implemented. Obviously, the prediction dataset should be a continuation of the same time series. Ideally, the dataset should be split into TRAIN, VALIDATION, and possibly TEST/PREDICT sets.
• I do not mask in the decoder since there are no labels; decoding is non-autoregressive.
• Discriminator accuracy is around 0.75. It should probably be closer to 0.50.
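For reference, here is a minimal sketch of the quantile (pinball) loss that the qloss/q50 numbers above refer to, under the standard definition; whether the reported q50 is additionally normalized (e.g. by the sum of absolute targets, as in the common q-risk metric) is an assumption to verify against the code:

L_\rho(y, \hat{y}) = \max\left( \rho (y - \hat{y}),\; (\rho - 1)(y - \hat{y}) \right)

The q50 metric corresponds to \rho = 0.5, aggregated over all forecast points.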

    To do/try:

• The sparse attention function is not implemented. A sparsemax function is available for TensorFlow 2 via tensorflow_addons. Producing exactly-zero attention weights may improve performance.
• I want to see performance with a larger hidden width and more heads.

    Commands:

    Prepare datasets:

Download LD2011_2014.txt (the electricity load dataset). The sample commands assume a data subfolder.

python prepare_data.py --lookback_history=168 --estimate_length=24

    Train:

python training.py --action=TRAIN --output_dir=checkpoints-27 --lookback_history=168 --estimate_length=24 --train_epochs=1500 --learning_rate=1e-4 --minimal_rate=1e-5 --decay_steps=50000 --warmup_steps=50000 --clip_gradients=-1.0 --hidden_size=128 --feedforward_size=128 --embedding_size=20 --discriminator_lambda=0.1 --num_attention_heads=4 --num_hidden_layers=2 --dropout_prob=0.3 --num_series=370 --training_set_size=321598 --train_file=data/train.tfrecords --test_file=data/test.tfrecords --predict_file=data/test.tfrecords --train_scaler_file=data/train_scaler.joblib --test_scaler_file=data/test_scaler.joblib --predict_scaler_file=data/test_scaler.joblib --batch_size=64

    Generator loss output sample

    Discriminator loss output sample

    Discriminator accuracy output sample

    Evaluate:

python training.py --action=EVALUATE --output_dir=checkpoints-27 --lookback_history=168 --estimate_length=24 --train_epochs=1500 --learning_rate=1e-4 --minimal_rate=1e-5 --decay_steps=50000 --warmup_steps=50000 --clip_gradients=-1.0 --hidden_size=128 --feedforward_size=128 --embedding_size=20 --discriminator_lambda=0.1 --num_attention_heads=4 --num_hidden_layers=2 --dropout_prob=0.3 --num_series=370 --training_set_size=321598 --train_file=data/train.tfrecords --test_file=data/test.tfrecords --predict_file=data/test.tfrecords --train_scaler_file=data/train_scaler.joblib --test_scaler_file=data/test_scaler.joblib --predict_scaler_file=data/test_scaler.joblib --batch_size=64; cat output.csv

    Predict:

python training.py --action=PREDICT --output_dir=checkpoints-27 --lookback_history=168 --estimate_length=24 --train_epochs=1500 --learning_rate=1e-4 --minimal_rate=1e-5 --decay_steps=50000 --warmup_steps=50000 --clip_gradients=-1.0 --hidden_size=128 --feedforward_size=128 --embedding_size=20 --discriminator_lambda=0.1 --num_attention_heads=4 --num_hidden_layers=2 --dropout_prob=0.3 --num_series=370 --training_set_size=321598 --train_file=data/train.tfrecords --test_file=data/test.tfrecords --predict_file=data/test.tfrecords --train_scaler_file=data/train_scaler.joblib --test_scaler_file=data/test_scaler.joblib --predict_scaler_file=data/test_scaler.joblib --batch_size=64; less output.csv

Performance:

Adversarial with parameters as provided in samples:

q50 during training:

    output sample

    Final Testing set:

    mae: 0.097607 mbe: -0.022780 mape: 16.283786 smape: 4.758574 mse: 0.052448 rmse: 0.229016 q50: 0.099462

Original repository: https://github.com/mangushev/ast
  • dilithium

    Dilithium


    This repository contains the official reference implementation of the Dilithium signature scheme, and an optimized implementation for x86 CPUs supporting the AVX2 instruction set. Dilithium is standardized as FIPS 204.

    Build instructions

    The implementations contain several test and benchmarking programs and a Makefile to facilitate compilation.

    Prerequisites

    Some of the test programs require OpenSSL. If the OpenSSL header files and/or shared libraries do not lie in one of the standard locations on your system, it is necessary to specify their location via compiler and linker flags in the environment variables CFLAGS, NISTFLAGS, and LDFLAGS.

    For example, on macOS you can install OpenSSL via Homebrew by running

    brew install openssl

    Then, run

    export CFLAGS="-I/opt/homebrew/opt/openssl@1.1/include"
    export NISTFLAGS="-I/opt/homebrew/opt/openssl@1.1/include"
    export LDFLAGS="-L/opt/homebrew/opt/openssl@1.1/lib"

    before compilation to add the OpenSSL header and library locations to the respective search paths.

    Test programs

    To compile the test programs on Linux or macOS, go to the ref/ or avx2/ directory and run

    make

    This produces the executables

    test/test_dilithium$ALG
    test/test_vectors$ALG

    where $ALG ranges over the parameter sets 2, 3, and 5.

• test_dilithium$ALG runs 10000 iterations of generating a key pair, signing a random 59-byte message, and verifying the produced signature. The program also attempts to verify wrong signatures, in which a single randomly chosen byte of a valid signature has been distorted. It aborts with an error message and returns -1 if anything fails; otherwise it outputs the key and signature sizes and returns 0.
    • test_vectors$ALG performs further tests of internal functions and prints deterministically generated test vectors for several intermediate values that occur in the Dilithium algorithms. Namely, a 48 byte seed, the matrix A corresponding to the first 32 bytes of seed, a short secret vector s corresponding to the first 32 bytes of seed and nonce 0, a masking vector y corresponding to the seed and nonce 0, the high bits w1 and the low bits w0 of the vector w = Ay, the power-of-two rounding t1 of w and the corresponding low part t0, and the challenge c for the seed and w1. This program is meant to help to ensure compatibility of independent implementations.
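For example, after building one might run the parameter-set-2 programs from within ref/ or avx2/ (a sketch; the output file name is arbitrary):

./test/test_dilithium2
./test/test_vectors2 > test_vectors2.txt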

    Benchmarking programs

For benchmarking the implementations, we provide speed test programs for x86 CPUs that use the Time Stamp Counter (TSC) or the actual cycle counter provided by the Performance Measurement Counters (PMC) to measure performance. To compile the programs run

    make speed

    This produces the executables

    test/test_speed$ALG

for all parameter sets $ALG as above. The programs report the median and average cycle counts of 10000 executions of various internal functions and of the API functions for key generation, signing, and verification. By default the Time Stamp Counter is used. If you instead want to obtain the actual cycle counts from the Performance Measurement Counters, export CFLAGS="-DUSE_RDPMC" before compilation.
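For instance, a possible benchmarking run that switches to the PMC-based cycle counter might look like this (a sketch; reading the performance counters from user space may require elevated privileges or specific kernel settings on Linux):

export CFLAGS="-DUSE_RDPMC"
make speed
./test/test_speed2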

    Please note that the reference implementation in ref/ is not optimized for any platform, and, since it prioritises clean code, is significantly slower than a trivially optimized but still platform-independent implementation. Hence benchmarking the reference code does not provide representative results.

    Our Dilithium implementations are contained in the SUPERCOP benchmarking framework. See here for current cycle counts on an Intel KabyLake CPU.

    Randomized signing

    By default our code implements Dilithium’s hedged signing mode. To change this to the deterministic signing mode, undefine the DILITHIUM_RANDOMIZED_SIGNING preprocessor macro at compilation by either commenting the line

    #define DILITHIUM_RANDOMIZED_SIGNING

    in config.h, or adding -UDILITHIUM_RANDOMIZED_SIGNING to the compiler flags in the environment variable CFLAGS.
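For example, a minimal sketch of building with deterministic signing via the environment-variable route described above:

export CFLAGS="-UDILITHIUM_RANDOMIZED_SIGNING"
make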

    Shared libraries

    All implementations can be compiled into shared libraries by running

    make shared

    For example in the directory ref/ of the reference implementation, this produces the libraries

    libpqcrystals_dilithium$ALG_ref.so

    for all parameter sets $ALG, and the required symmetric crypto library

    libpqcrystals_fips202_ref.so
    

All global symbols in the libraries lie in the namespaces pqcrystals_dilithium$ALG_ref and pqcrystals_fips202_ref. Hence it is possible to link a program against all libraries simultaneously and obtain access to all implementations for all parameter sets. The corresponding API header file is ref/api.h, which contains prototypes for all API functions and preprocessor defines for the key and signature lengths.
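As a hedged sketch of how a program could link against these libraries (the source file name and search paths below are illustrative; adjust them to your project and make sure the .so files are on the runtime library path, e.g. via LD_LIBRARY_PATH):

# Compile demo.c against the Dilithium2 reference shared library and the FIPS 202 library.
gcc -o demo demo.c -I ref -L ref -lpqcrystals_dilithium2_ref -lpqcrystals_fips202_ref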

Original repository: https://github.com/pq-crystals/dilithium
  • SymbolGrid


    SymbolGrid

    Getting Started


    • Read the Code of Conduct
    • Read the CONTRIBUTING.md guidelines
    • Download Xcode 15 or later
• Browse the open issues and comment on the one you would like to work on
  • Only one person may work on a given issue, except where noted.
    • Fork this repo
    • Clone the repo to your machine (do not open Xcode yet)
• In the same folder that contains SymbolGrid.xcconfig.template, run this command in Terminal to create a new Xcode configuration file (which properly sets up the signing information)
    cp SymbolGrid.xcconfig.template SymbolGrid.xcconfig
    • Open Xcode

    • In the SymbolGrid.xcconfig file, fill in your DEVELOPMENT_TEAM and PRODUCT_BUNDLE_IDENTIFIER.

      • You can find this by logging into the Apple Developer Portal
  • This works with both free and paid Apple Developer accounts. Do NOT run this app on a real device due to issues with the Sign in With Apple capability.
    DEVELOPMENT_TEAM = ABC123
    PRODUCT_BUNDLE_IDENTIFIER = com.mycompany.symbols
    
    • Build the project ✅

• Check out a new branch (from the dev branch) to work on an issue, as sketched below
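A minimal sketch of that branching workflow (the branch name below is hypothetical; use one that references the issue you were assigned):

git fetch origin
git checkout dev
git checkout -b issue-123-short-description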

    Contributing

    To start contributing, review CONTRIBUTING.md. New contributors are always welcome to support this project.

    👀 Please be sure to comment on an issue you’d like to work on and Dalton Alexandre, the maintainer of this project, will assign it to you! You can only work on ONE issue at a time.

Check out any issue labeled hacktoberfest to start contributing.

    Important

    View the GitHub Discussions for the latest information about the repo.

    Issue Labels

    • first-timers-only is only for someone who has not contributed to the repo yet! (and is new to open source and iOS development)
    • good-first-issue is an issue that’s beginner level

    Please choose an appropriate issue for your skill level

    Contributors

    Made with contrib.rocks.

    License

    This project is licensed under Apache 2.0.

    Star History

    Star History Chart
Original repository: https://github.com/dl-alexandre/SymbolGrid
  • easytable


    @easytable/vue

    Warning

This repository was migrated from vue-easytable (Vue.js 2.x) and is being rewritten on top of Vue.js 3.x; the rewrite is now mostly complete.

English | 中文

Introduction

A powerful Vue 3.x table component. You can use it as a data table, or like Microsoft Excel or Google Sheets. It supports virtual scrolling, cell editing, and more.

Important

If you are using Vue 2.x, please use the vue-easytable component library.

Features

• Uses virtual scrolling and supports displaying 300,000 rows of data
• Free forever. Of course, you may also choose to donate

API & Documentation

Supported Features

Base Components

Table Component

If a feature you need is missing, please let us know.

Installation

    pnpm install @easytable/vue

    or

    yarn add @easytable/vue

Usage

    Write the following in main.js:

import { createApp } from 'vue';
import '@easytable/vue/libs/theme-default/index.css';
import { useVeTable } from '@easytable/vue';
import App from './App.vue';

createApp(App)
  .use(useVeTable())
  .mount('#app');

    Example:

    <template>
      <ve-table :columns="columns" :table-data="tableData" />
    </template>
    
    <script>
      export default {
        data() {
          return {
            columns: [
              { field: "name", key: "a", title: "Name", align: "center" },
              { field: "date", key: "b", title: "Date", align: "left" },
              { field: "hobby", key: "c", title: "Hobby", align: "right" },
              { field: "address", key: "d", title: "Address" },
            ],
            tableData: [
              {
                name: "John",
                date: "1900-05-20",
                hobby: "coding and coding repeat",
                address: "No.1 Century Avenue, Shanghai",
              },
              {
                name: "Dickerson",
                date: "1910-06-20",
                hobby: "coding and coding repeat",
                address: "No.1 Century Avenue, Beijing",
              },
              {
                name: "Larsen",
                date: "2000-07-20",
                hobby: "coding and coding repeat",
                address: "No.1 Century Avenue, Chongqing",
              },
              {
                name: "Geneva",
                date: "2010-08-20",
                hobby: "coding and coding repeat",
                address: "No.1 Century Avenue, Xiamen",
              },
              {
                name: "Jami",
                date: "2020-09-20",
                hobby: "coding and coding repeat",
                address: "No.1 Century Avenue, Shenzhen",
              },
            ],
          };
        },
      };
    </script>

Development Plan

Work in progress

Supported Environments

• Modern browsers and IE11+

IE / Edge: IE11, Edge
Firefox: last 2 versions
Chrome: last 2 versions
Safari: last 2 versions
Opera: last 2 versions

How to Contribute

If you would like to contribute, pull requests are welcome.

    Star History

    Star History Chart

Contributors

Thanks to huangshuwei, the author of the original component library.

Thanks also to the following contributors 🙏

    License

    http://www.opensource.org/licenses/mit-license.php

Original repository: https://github.com/kohaiy/easytable
  • Evoxt

Evoxt Coupon Codes and Latest 2024 Japan VPS Deals: A Summary

    Evoxt Introduction

    Evoxt, a VPS hosting provider, recently announced the launch of its new Japan VPS plans at competitive prices. Starting at $2.99/month, these VPS packages offer 512MB RAM, 1 CPU core, 5GB SSD storage, and 250GB monthly traffic with 1Gbps bandwidth. The servers are KVM virtualized with pure NVMe SSD arrays, offering high performance and stability. The Japan data center is located in Osaka and uses SoftBank lines, providing excellent connectivity.


    Evoxt Official Website Address

    https://www.evoxt.com/

    Evoxt Promotions

    The following table outlines the various VPS packages available from Evoxt, detailing the memory, CPU, NVMe storage, monthly traffic, and prices. These plans support major Linux distributions, as well as Windows Server 2012, 2016 (both Chinese and English versions), and 2022.

    Memory CPU NVMe Storage Traffic Price Purchase Link
    512MB 1 Core 5GB 250GB/month $2.99/month Link
    1GB 1 Core 10GB 250GB/month $4.99/month Link
    2GB 1 Core 20GB 500GB/month $5.99/month Link
    2GB 2 Cores 20GB 500GB/month $6.95/month Link
    4GB 2 Cores 30GB 1TB/month $11.99/month Link
    4GB 4 Cores 30GB 1TB/month $14.99/month Link
    8GB 4 Cores 60GB 2TB/month $23.99/month Link
    8GB 8 Cores 60GB 2TB/month $29.99/month Link
    16GB 8 Cores 80GB 3TB/month $47.99/month Link
    16GB 16 Cores 80GB 3TB/month $60.95/month Link
    32GB 16 Cores 100GB 5TB/month $95.99/month Link

    Evoxt Reviews

    Evoxt is known for offering affordable VPS solutions with a variety of configurations. Its new Japan VPS plans are competitively priced and use high-frequency CPUs, SSD RAID10 arrays, and 1Gbps bandwidth, making them suitable for various use cases. The Osaka data center provides excellent connectivity through SoftBank lines, ensuring reliable and stable performance. Whether you’re hosting a website or running applications, Evoxt’s plans can meet a range of needs.

Original repository: https://github.com/pw29aprile67/Evoxt