
Sentry Developer Contribution Guide: Backend Services (Python / Go / Rust / Node.js)

2022-02-02, by For less

This content is compiled from the official Sentry development documentation.



  • Service management (devservices)
    • Viewing service logs
    • Running CLI clients for redis, postgres, and clickhouse
    • Removing container state
  • Port allocation
    • Finding out what's running on your machine
  • Asynchronous workers
    • Registering a task
    • Running a worker
    • Starting the cron process
    • Configuring the broker
      • Redis
      • RabbitMQ
  • Email
    • Outbound email
    • Inbound email
      • Mailgun
  • Node storage
    • Django backend
    • Custom backend
  • File store
    • Filesystem backend
    • Google Cloud Storage backend
    • Amazon S3 backend
    • MinIO S3 backend
  • Time series storage
    • RedisSnuba backend (recommended)
    • Dummy backend
    • Redis backend
  • Write buffers
    • Configuration
      • Redis
  • Metrics
    • Statsd backend
    • Datadog backend
    • DogStatsD backend
    • Logging backend
  • Quotas
    • Event quotas
      • Configuration
      • System-wide rate limits
      • User-based rate limits
      • Project-based rate limits
    • Notification rate limits
      • Configuration
  • Notification digests
    • Configuration
    • Backends
      • Dummy backend
      • Redis backend
      • Example configuration
  • Relay
  • Snuba
  • Backend chart rendering
    • Using Chartcuterie in the Sentry backend
    • Configuring chart rendering
      • Service initialization
      • Adding / removing chart types
    • Running Chartcuterie in development
      • Updating chart types locally
  • How it works
    • Chartcuterie startup
    • Render calls from Sentry

Service management (devservices)

Sentry provides an abstraction over Docker to run the services required during development, called devservices.

Usage: sentry devservices [OPTIONS] COMMAND [ARGS]...

  Manage dependent development services required for Sentry.

  Do not use in production!

  --help  Show this message and exit.

  attach  Run a single devservice in foreground, as...
  down    Shut down all services.
  rm      Delete all services and associated data.
  up      Run/update dependent services.

Viewing service logs

# Follow snuba logs
docker logs -f sentry_snuba

Running CLI clients for redis, postgres, and clickhouse

# redis
docker exec -it sentry_redis redis-cli

# clickhouse
docker exec -it sentry_clickhouse clickhouse-client

# psql
docker exec -it sentry_postgres psql -U postgres

Removing container state

If you've really screwed up a container or volume, you can use devservices rm to start from scratch.

# Delete all data (containers, volumes, and networks) associated with ALL services
sentry devservices rm

For example, suppose we've broken the postgres database and want to reset the postgres data. We can do the following:

# Delete all data (containers, volumes, and networks) associated with a single service
sentry devservices rm postgres

Port allocation

This is a simple list of the ports used by Sentry services, or by any dependency of a Sentry service, in a development setup. It has two purposes:

  • Figuring out why a port is in use on your machine and which process to kill to free it.
  • Figuring out which ports are safe to assign to a new service.
| Port | Service | Description |
|------|---------|-------------|
| 9000 | Clickhouse | Devservice clickhouse. Snuba's database. |
| 8123 | Clickhouse | |
| 9009 | Clickhouse | |
| 3021 | Symbolicator | Devservice symbolicator. Used to process stack traces. |
| 1218 | Snuba | Devservice snuba. Used to search events. |
| 9092 | Kafka | Devservice kafka. Used for relay-sentry communication and optionally for sentry-snuba communication. |
| 6379 | Redis | Devservice redis (or possibly installed via Homebrew). Responsible for caching, relay project configs, and Celery queues. |
| 5432 | Postgres | Devservice postgres (or possibly installed via Homebrew). |
| 7899 | Relay | Devservice relay. Provides the API for SDKs to send events to (a.k.a. event ingestion). Webpack on port 8000 reverse-proxies to this server. Started/stopped with sentry devserver. |
| 8000 | Sentry Dev | Sentry API + frontend. Webpack listens on this port and proxies API requests to the Django app. |
| 8001 | uWSGI | Started/stopped with sentry devserver. Serves the Django app/API. Webpack on port 8000 reverse-proxies to this server. |
| 7999 | Sentry frontend prod proxy | Used to test local UI changes against the prod API. |
| 8000 | Develop docs | The website for this documentation. Conflicts with Sentry Dev. |
| 3000 | User docs | User-facing documentation. May conflict with Relay if Relay is run outside of devservices. |
| 9001 | Sentry Dev Styleguide server | Bound when running sentry devserver --styleguide. |
| 9000 | sentry run web | Traditional default port for sentry run web; changed to 9001 to avoid conflicting with Clickhouse. |
| 9001 | sentry run web | Frontend without webpack or Relay; Sentry Dev is probably better. Conflicts with the Sentry Dev Styleguide server. |
| 8000 | Relay mkdocs documentation | Will at some point be merged into the existing docs repo. Conflicts with Sentry Dev. |

Finding out what's running on your machine

  • Use lsof -nP -i4 | grep LISTEN on macOS to find occupied ports.
  • Docker for Mac's Dashboard UI shows which docker containers/devservices you have running, along with their port assignments and start/stop options.

Asynchronous workers

Sentry comes with a built-in queue to process tasks in a more asynchronous fashion. For example, when an event comes in, instead of writing it to the database immediately, Sentry sends a job to the queue so that the request can be returned right away, and a background worker actually processes and saves the data.

Sentry relies on the Celery library for managing workers.

Registering a task

Sentry configures tasks with a special decorator that gives us more explicit control over the callables.

from sentry.tasks.base import instrumented_task

@instrumented_task(
    name="sentry.tasks.do_work",
    default_retry_delay=60 * 5,
)
def do_work(kind_of_work, **kwargs):
    # ...

There are a few important points:

  • The task name _must_ be declared.

    The task name is how Celery identifies a message (request) and which function and worker are needed to handle it. If a task has no name, Celery derives one from the module and function name, which ties the name to the location of the code and makes it more brittle under future code maintenance.

  • Tasks _must_ accept `**kwargs` to handle rolling compatibility.

    This ensures tasks will accept any message that happens to be in the queue instead of failing on unknown arguments. It helps when rolling back changes: deployments are not instantaneous, and messages may be produced with multiple versions of the arguments.

    While this allows rolling forward and backward without tasks failing outright, care must still be taken when changing arguments so that workers can handle messages with both the old and the new arguments. This does reduce the number of changes needed for such a migration and gives operators more flexibility, but losing messages because of unknown arguments is still unacceptable.

  • Tasks _should_ automatically retry on failure.

  • Task arguments _should_ be primitive and small.

    Task arguments are serialized into the message sent over the broker, and workers need to deserialize them again. Doing this with complex types is fragile and should be avoided. For example, prefer passing an ID to the task, which can be used to load the data from cache, rather than the data itself.

    Similarly, to keep the message broker and workers operating efficiently, serializing large values into a message results in large messages, large queues, and more (de)serialization overhead, so this should be avoided too.

  • New task modules must be added to CELERY_IMPORTS.

    Celery workers look up tasks by name, which only works if the worker has imported the module with the decorated task function, because that is what registers the task by name. Therefore, every module containing a task must be added to the CELERY_IMPORTS setting in src/sentry/conf/.

Running a worker

Workers can be run with the Sentry CLI.

$ sentry run worker

Starting the cron process

Sentry schedules routine jobs via a cron process:

SENTRY_CONF=/etc/sentry sentry run cron

Configuring the broker

Sentry supports two primary brokers, which may be adjusted depending on your workload: RabbitMQ and Redis.

Redis

The default broker is Redis, which will work under most situations. The primary limitation of using Redis is that all pending work must fit in memory.

BROKER_URL = "redis://localhost:6379/0"

If your Redis connection requires a password for authentication, you need to use the following format:

BROKER_URL = "redis://:password@localhost:6379/0"


RabbitMQ

If you run with a high workload, or have concerns about fitting the pending workload in memory, RabbitMQ is the ideal candidate for backing Sentry's workers.

BROKER_URL = "amqp://guest:guest@localhost:5672/sentry"


Email

Sentry provides support for both outbound and inbound email.

Use of inbound email is fairly limited; currently it only supports processing replies to error and note notifications.

Outbound email

You'll need to configure an SMTP provider for outbound email.

TODO: document the mail preview backend.

mail.backend

Declared in `config.yml`.

The backend used for sending email. Options are smtp, console, and dummy.

Defaults to smtp. Use dummy if you want to disable email delivery.

mail.from

Declared in `config.yml`.

The email address used in the From header of outbound email.

Defaults to [email protected]. It is highly recommended to change this value to ensure reliable email delivery.

mail.host

Declared in `config.yml`.

The hostname to use for the SMTP connection.

Defaults to localhost.

mail.port

Declared in `config.yml`.

The port to use for the SMTP connection.

Defaults to 25.

mail.username

Declared in `config.yml`.

The username to use when authenticating with the SMTP server.

Defaults to (empty).

mail.password

Declared in `config.yml`.

The password to use when authenticating with the SMTP server.

Defaults to (empty).

mail.use-ssl

Declared in `config.yml`.

Should Sentry use SSL when connecting to the SMTP server?

Defaults to false.

mail.use-tls

Declared in `config.yml`.

Should Sentry use TLS when connecting to the SMTP server?

Defaults to false.

mail.list-namespace

Declared in `config.yml`.

The mailing-list namespace for emails sent by this Sentry server. This should be a domain you own (often the same domain as the value of the mail.from configuration parameter) or localhost.

Inbound email

For configuration, you can choose from different backends.

Mailgun

Start by choosing a domain to handle your inbound email. We find this is simplest if you maintain a domain separate from anything else. In our example we'll pick a dedicated inbound domain. You'll need to configure DNS records for the given domain according to the Mailgun documentation.

Create a new route in mailgun:

Filter Expression:
  ...
Description:
  Sentry inbound handler

Configure Sentry with the appropriate settings:

# Your Mailgun API key (used to verify incoming webhooks)
mail.mailgun-api-key: ""

# Set the SMTP hostname to your configured inbound domain
mail.reply-hostname: ""

# Tell Sentry to send the appropriate headers to enable
# replies to be received
mail.enable-replies: true

That's it! You can now respond to activity notifications on errors via your email client.

Node storage

Sentry provides an abstraction called 'nodestore' which is used for storing key/value blobs.

The default backend simply stores them as gzipped blobs in the default database's 'nodestore_node' table.

Django backend

The Django backend stores all data in the 'nodestore_node' table, using a gzipped JSON blob-as-text pattern.

The backend provides no options, so it should just be set to an empty dictionary.

SENTRY_NODESTORE = 'sentry.nodestore.django.DjangoNodeStorage'
SENTRY_NODESTORE_OPTIONS = {}

Custom backend

If you have a favorite data storage solution, it only needs to operate under a few rules for it to work with Sentry's blob storage:

  • set key to value
  • get key
  • delete key

For more information on implementing your own backend, take a look at sentry.nodestore.base.NodeStorage.
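As an illustration, a backend satisfying those three rules can be as small as the following in-memory sketch. The class and method names here are illustrative only; the real interface to implement lives in sentry.nodestore.base.NodeStorage:

```python
# Minimal in-memory key/value blob store demonstrating the nodestore
# contract: set a key to a value, get a key, delete a key.
class InMemoryNodeStorage:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)  # None if the key is missing

    def delete(self, key):
        self._data.pop(key, None)

store = InMemoryNodeStorage()
store.set("node:1", b"gzipped blob bytes")
assert store.get("node:1") == b"gzipped blob bytes"
store.delete("node:1")
assert store.get("node:1") is None
```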

File store

Sentry provides an abstraction called 'filestore' which is used for storing files (such as release artifacts).

The default backend stores files in /tmp/sentry-files, which is not suitable for production use.

Filesystem backend

filestore.backend: "filesystem"
filestore.options:
  location: "/tmp/sentry-files"

Google Cloud Storage backend

In addition to the configuration below, you'll need to make sure your shell environment has the variable GOOGLE_APPLICATION_CREDENTIALS set. For more information, see the Google Cloud documentation on setting up authentication.

filestore.backend: "gcs"
filestore.options:
  bucket_name: "..."

Amazon S3 backend

The S3 storage backend supports authenticating either with access keys or with an IAM instance role; when using the latter, omit access_key and secret_key. By default, S3 objects are created with the public-read ACL, which means that in addition to PutObject, GetObject, and DeleteObject, the account/role used must also have the PutObjectAcl permission. If you don't want your uploaded files to be publicly accessible, you can set default_acl to private.

filestore.backend: "s3"
filestore.options:
  access_key: "..."
  secret_key: "..."
  bucket_name: "..."
  default_acl: "..."

MinIO S3 backend

filestore.backend: "s3"
filestore.options:
  access_key: "..."
  secret_key: "..."
  bucket_name: "..."
  endpoint_url: ""

Time series storage

Sentry provides a service for storing time-series data. This is primarily used to display aggregate information for events and projects, as well as computing (in real time) event rates.

RedisSnuba backend (recommended)

This is the only backend that works 100% correctly:

SENTRY_TSDB = 'sentry.tsdb.redissnuba.RedisSnubaTSDB'

This backend talks to Snuba for metrics related to event ingestion, and to Redis for everything else. Snuba requires its own outcomes consumer to be running; currently this is not part of devservices.

The wrapped Redis TSDB can be configured like this (for the Redis options, see below):

SENTRY_TSDB_OPTIONS = {
    'redis': ...  # RedisTSDB options dictionary goes here
}

Dummy backend

As the name implies, all TSDB data is dropped on write and zeroes are returned on read:

SENTRY_TSDB = 'sentry.tsdb.dummy.DummyTSDB'

Redis backend

The "bare" Redis backend reads and writes all data to Redis. Columns related to event ingestion (organization stats) will show zeroed data, as that data is only available in Snuba.

SENTRY_TSDB = 'sentry.tsdb.redis.RedisTSDB'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option, as shown below:

SENTRY_TSDB_OPTIONS = {
    'cluster': 'tsdb',
}

Write buffers

Sentry manages database row contention by buffering writes and flushing bulk changes to the database over a period of time. This is useful if you have high concurrency, especially when the writes frequently hit the same rows.

For example, if you happen to receive 100,000 events and 10% of them report a connection issue (which would all get grouped together), enabling a buffer backend changes things so that each count update is instead put into a queue, and all updates are performed at a rate the queue can keep up with.
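The idea can be sketched with a toy buffer. This is not Sentry's actual Buffer API, just the concept: individual increments are accumulated in memory and flushed as one bulk update per key, reducing row contention:

```python
from collections import Counter

class ToyWriteBuffer:
    def __init__(self):
        self.pending = Counter()

    def incr(self, key, amount=1):
        self.pending[key] += amount   # cheap in-memory update

    def flush(self):
        updates = dict(self.pending)  # one bulk write per key
        self.pending.clear()
        return updates

buf = ToyWriteBuffer()
for _ in range(3):
    buf.incr("group:42")  # three events hitting the same row
# A single flush carries the combined count instead of three writes.
assert buf.flush() == {"group:42": 3}
assert buf.flush() == {}
```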

Configuration

To specify a backend, simply modify the SENTRY_BUFFER and SENTRY_BUFFER_OPTIONS values in your configuration:

SENTRY_BUFFER = 'sentry.buffer.base.Buffer'

Redis

Configuring the Redis backend requires the queue, or you won't see any benefit (in fact, you'll only negatively impact performance).

Configuration is straightforward:

SENTRY_BUFFER = 'sentry.buffer.redis.RedisBuffer'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option, as shown below:

SENTRY_BUFFER_OPTIONS = {
    'cluster': 'buffer',
}


Metrics

Sentry provides an abstraction called 'metrics' which is used for internal monitoring, generally timings and various counters.

The default backend simply discards them (though some values are still kept in the internal time series database).

Statsd backend

SENTRY_METRICS_BACKEND = 'sentry.metrics.statsd.StatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'host': 'localhost',
    'port': 8125,
}

Datadog backend

Datadog will require you to install the datadog package into your Sentry environment:

$ pip install datadog

In your configuration:

SENTRY_METRICS_BACKEND = 'sentry.metrics.datadog.DatadogMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'api_key': '...',
    'app_key': '...',
    'tags': {},
}

Once installed, Sentry's metrics will be sent to the Datadog REST API over HTTPS.

DogStatsD backend

Using the DogStatsD backend requires a Datadog Agent to be running with the DogStatsD backend enabled (on port 8125 by default).

You'll also need to install the datadog Python package into your Sentry environment:

$ pip install datadog

In your configuration:

SENTRY_METRICS_BACKEND = 'sentry.metrics.dogstatsd.DogStatsdMetricsBackend'
SENTRY_METRICS_OPTIONS = {
    'statsd_host': 'localhost',
    'statsd_port': 8125,
    'tags': {},
}

Once configured, the metrics backend will emit to the DogStatsD server, which will then periodically flush its data to Datadog over HTTPS.

Logging backend

The LoggingBackend reports all operations to the sentry.metrics logger. In addition to the metric name and value, log messages also include extra data such as the instance and tags values, which can be displayed using a custom formatter.

SENTRY_METRICS_BACKEND = 'sentry.metrics.logging.LoggingBackend'

LOGGING['loggers']['sentry.metrics'] = {
    'level': 'DEBUG',
    'handlers': ['console:metrics'],
    'propagate': False,
}

LOGGING['formatters']['metrics'] = {
    'format': '[%(levelname)s] %(message)s; instance=%(instance)r; tags=%(tags)r',
}

LOGGING['handlers']['console:metrics'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'metrics',
}

Quotas

With the way Sentry works, you may find yourself in a situation where you're seeing too much inbound traffic without a good way to drop excess messages. There are a few solutions to this, and you may want to employ them all if you're faced with this problem.

Event quotas

One of the primary mechanisms for limiting workload in Sentry involves setting up event quotas. These can be configured per project as well as system-wide, and allow you to limit the maximum number of events accepted within a 60-second period.

Configuration

The primary implementation uses Redis, and you only need to configure the connection information:

SENTRY_QUOTAS = 'sentry.quotas.redis.RedisQuota'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option, as shown below:

SENTRY_QUOTA_OPTIONS = {
    'cluster': 'quota',
}

If you have additional needs, you're free to extend the base Quota class, just as the Redis implementation does.
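Conceptually, an event quota is a counter per 60-second window. A toy fixed-window version (illustrative only, not Sentry's RedisQuota implementation) might look like:

```python
import time

# Counts events per (project, 60s-window) bucket and reports whether
# the current event pushed the bucket over its limit.
class ToyQuota:
    def __init__(self, limit, window=60):
        self.limit = limit
        self.window = window
        self.counts = {}

    def is_rate_limited(self, project_id, now=None):
        now = time.time() if now is None else now
        bucket = (project_id, int(now // self.window))
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] > self.limit

q = ToyQuota(limit=2)
assert q.is_rate_limited("p1", now=0) is False
assert q.is_rate_limited("p1", now=1) is False
assert q.is_rate_limited("p1", now=2) is True   # third event in the window
assert q.is_rate_limited("p1", now=61) is False  # a new window starts
```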

System-wide rate limits

You can configure the system-wide maximum per-minute rate limit:

system.rate-limit: 500

For example, in your configuration you can do the following:

from sentry.conf.server import SENTRY_OPTIONS

SENTRY_OPTIONS['system.rate-limit'] = 500

Alternatively, if you navigate to /manage/settings/, you'll find an admin panel with an option to set the Rate Limit, which gets stored in the quota implementation above.

User-based rate limits

You can configure the maximum per-minute rate limits on a per-user basis:

auth.user-rate-limit: 100
auth.ip-rate-limit: 100

Project-based rate limits

To do project-based rate limiting, open the project's Settings. Under the Client Keys (DSN) tab, find the key you want to rate limit and click the corresponding Configure button. This will show key/project-specific rate limit settings.

Notification rate limits

In some cases there may be concerns about limiting things such as outbound email notifications. To address this, Sentry provides a rate limiting subsystem that supports arbitrary rate limits.

Configuration

As with event quotas, the primary implementation uses Redis:

SENTRY_RATELIMITER = 'sentry.ratelimits.redis.RedisRateLimiter'

By default, this will use the Redis cluster named default. To use a different cluster, provide the cluster option, as shown below:

SENTRY_RATELIMITER_OPTIONS = {
    'cluster': 'ratelimiter',
}

Notification digests

Sentry provides a service that collects notifications as they occur and schedules them for delivery as aggregated "digest" notifications.

Configuration

Although the digest system is configured with a reasonable set of default options, the SENTRY_DIGESTS_OPTIONS setting can be used to fine-tune the digest backend behavior to suit the needs of your unique installation. All backends share a common set of options, defined below; some backends may also define additional options specific to their individual implementations.

minimum_delay: The minimum_delay option defines the default minimum amount of time (in seconds) to wait between scheduling digests for delivery after the initial scheduling. This can be overridden on a per-project basis in the notification settings.

maximum_delay: The maximum_delay option defines the default maximum amount of time (in seconds) to wait between scheduling digests for delivery. This can be overridden on a per-project basis in the notification settings.

increment_delay: The increment_delay option defines how long each observation of an event should delay scheduling, up until the maximum_delay after the last time a digest was processed.

capacity: The capacity option defines the maximum number of items that should be contained within a timeline. Whether this is a hard or soft limit is backend-dependent; see the truncation_chance option.

truncation_chance: The truncation_chance option defines the probability that an add operation will trigger a truncation of the timeline to keep its size close to the defined capacity. A value of 1 will cause the timeline to be truncated on every add operation (effectively making capacity a hard limit), while a lower probability increases the chance of the timeline growing past its intended capacity, but improves the performance of add operations by avoiding truncation, which is a potentially expensive operation, especially on large data sets.
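To make capacity and truncation_chance concrete, here is a toy timeline (illustrative only, not Sentry's implementation):

```python
import random

# Each add() appends an item, then truncates the timeline back down to
# `capacity` with probability `truncation_chance`. With a chance of 1.0
# the capacity acts as a hard limit; with lower values the timeline may
# temporarily exceed it, trading accuracy for cheaper add operations.
class ToyTimeline:
    def __init__(self, capacity, truncation_chance):
        self.capacity = capacity
        self.truncation_chance = truncation_chance
        self.items = []

    def add(self, item):
        self.items.append(item)
        if random.random() < self.truncation_chance:
            self.items = self.items[-self.capacity:]  # keep newest items

timeline = ToyTimeline(capacity=3, truncation_chance=1.0)  # hard limit
for i in range(10):
    timeline.add(i)
assert timeline.items == [7, 8, 9]
```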

Backends

Dummy backend

The dummy backend disables digest scheduling; all notifications are sent as they occur (subject to rate limits). This is the default digest backend for installations created before version 8.

The dummy backend can be specified via the SENTRY_DIGESTS setting:

SENTRY_DIGESTS = 'sentry.digests.backends.dummy.DummyBackend'

Redis backend

The Redis backend uses Redis to store schedule and pending notification data. This is the default digest backend for installations created as of version 8.

The Redis backend can be specified via the SENTRY_DIGESTS setting:

SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend'

The Redis backend accepts several options beyond the basic set, provided via SENTRY_DIGESTS_OPTIONS:

cluster: The cluster option defines the Redis cluster that should be used for storage. If no cluster is specified, the default cluster is used.

Changing the cluster value, or the cluster configuration, after data has been written to the digest backend may cause unexpected effects: namely, it creates the potential for data loss during cluster size changes. This option should be adjusted with care on operational systems.

ttl: The ttl option defines the time to live (in seconds) for records, timelines, and digests. This can (and should) be a relatively high value, since timelines, digests, and records should all be deleted after they have been processed; this is primarily to ensure that stale data doesn't hang around too long in the case of a configuration error. This should be larger than the maximum scheduling delay to ensure data is not evicted too early.

Example configuration

SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend'
SENTRY_DIGESTS_OPTIONS = {
    'capacity': 100,
    'cluster': 'digests',
}


Relay

Relay is a service for event filtering, rate limiting, and event processing.


Backend chart rendering

Sentry's frontend provides users with various types of detailed interactive charts that closely match the look and feel of the Sentry product. Historically, these charts were something that existed only in our web application.

However, there are some scenarios where it's valuable to display charts in some context outside of the application. For example:

  • Slack unfurls of Discover charts, metric alert notifications, issue details, or any other Sentry link where it may be useful to view a chart within Slack.

  • Notification and digest emails, to visualize trends as charts.

Luckily, Sentry provides built-in functionality backed by the internal Chartcuterie NodeJS service, which can produce charts in image format via an HTTP API. Charts are rendered using the same ECharts library used on the frontend. Chartcuterie shares code with the Sentry frontend, which makes it easy to maintain a consistent look and feel between charts rendered on the frontend and charts generated by Chartcuterie.

Using Chartcuterie in the Sentry backend

Generating charts with Chartcuterie is straightforward.

Import the generate_chart function, provide a chart type and a data object, and get back a public image URL.

from sentry.charts import generate_chart, ChartType

# The shape of data is determined by the RenderDescriptor in the
# configuration module for the ChartType being rendered.
data = {}

chart_url = generate_chart(ChartType.MY_CHART_TYPE, data)

Configuring chart rendering

Chartcuterie loads an external JavaScript module that determines how it renders charts. The module directly configures the ECharts options object, including the transformation of the series data provided to Chartcuterie in POST /render calls.

This module exists as part of getsentry/sentry and can be found in static/app/chartcuterie/config.tsx.

Service initialization

You can configure an optional initialization function, init, to run when the service starts. This function has access to Chartcuterie's global echarts object, and you can use it to register utilities (for example, registerMaps).

Adding / removing chart types

Chart rendering is configured per "chart type". Each chart type needs a well-known name declared in both the frontend application and the backend charts module.

  1. On the frontend, add a ChartType in static/app/chartcuterie/types.tsx.

  2. In static/app/chartcuterie/config.tsx, register the chart's RenderDescriptor, which describes its appearance and series transformations. You can use the register function.

  3. On the backend, add a matching ChartType to the sentry.charts.types module.

  4. Deploy your changes in Sentry. The configuration module will automatically propagate to Chartcuterie within 5 minutes.

     You do NOT need to deploy Chartcuterie.

Do NOT deploy functionality that uses new chart types at the same time as the configuration module. Because of the propagation delay, there is no guarantee that a new chart type will be available immediately after deployment.

The configuration module includes the commit SHA of the deploy, which lets Chartcuterie check on each polling tick whether it has received a new configuration module.

Running Chartcuterie in development

To enable Chartcuterie in your local development environment, start by enabling it in your config.yml:

# enable chartcuterie
chart-rendering.enabled: true

Currently you'll need to manually build the configuration module in your development environment:

yarn build-chartcuterie-config

You can then start the Chartcuterie devservice. If the devservice doesn't start, check that the chart-rendering.enabled key is correctly set to true (use sentry config get chart-rendering.enabled).

sentry devservices up chartcuterie

You can verify that the service started successfully by checking its logs:

docker logs -f sentry_chartcuterie

It should look like this:

info: Using polling strategy to resolve configuration...
info: Polling every 5s for config...
info: Server listening for render requests on port 9090
info: Resolved new config via polling: n styles available. {"version":"xxx"}
info: Config polling switching to idle mode
info: Polling every 300s for config...

Your development environment is now ready to make render calls to the local instance of Chartcuterie.

Updating chart types locally

Currently, you'll need to rebuild the configuration module with yarn build-chartcuterie-config each time you make a change. This may be improved in the future.

How it works

Service diagrams of the Chartcuterie service and how it interacts with the Sentry application server:

Chartcuterie startup

(service diagram)

Render calls from Sentry

(service diagram)
