API¶
Main Class¶
LogClient (endpoint, accessKeyId, accessKey) |
Construct the LogClient with endpoint, accessKeyId, accessKey. |
LogException (errorCode, errorMessage[, …]) |
The exception raised for errors in log requests and responses. |
LogResponse (headers[, body]) |
The base class of all log responses. |
Logging Handler Class¶
SimpleLogHandler (end_point, access_key_id, …) |
SimpleLogHandler sends each log synchronously (blocking); just for simple test purposes. |
QueuedLogHandler (end_point, access_key_id, …) |
Queued log handler, a tuned asynchronous log handler. |
UwsgiQueuedLogHandler (*args, **kwargs) |
Queued log handler for uWSGI; depends on the library uwsgidecorators, which needs to be deployed separately. |
LogFields |
Fields uploaded automatically. Possible fields: record_name, level, func_name, module, file_path, line_no, process_id, process_name, thread_id, thread_name |
Request and Config Class¶
GetHistogramsRequest ([project, logstore, …]) |
The request used to get histograms of a query from log. |
GetLogsRequest ([project, logstore, …]) |
The request used to get logs by a query from log. |
GetProjectLogsRequest ([project, query]) |
The request used to get logs by a query from log across multiple logstores. |
ListTopicsRequest ([project, logstore, …]) |
The request used to get topics of a query from log. |
ListLogstoresRequest ([project]) |
The request used to list logstores from log. |
PutLogsRequest ([project, logstore, topic, …]) |
The request used to send data to log. |
LogtailConfigGenerator |
Generator of Logtail config |
PluginConfigDetail (logstoreName, configName, …) |
The logtail config for simple mode |
SeperatorFileConfigDetail (logstoreName, …) |
The logtail config for separator mode |
SimpleFileConfigDetail (logstoreName, …[, …]) |
The logtail config for simple mode |
FullRegFileConfigDetail (logstoreName, …[, …]) |
The logtail config for full regex mode |
JsonFileConfigDetail (logstoreName, …[, …]) |
The logtail config for json mode |
ApsaraFileConfigDetail (logstoreName, …[, …]) |
The logtail config for Apsara mode |
SyslogConfigDetail (logstoreName, configName, tag) |
The logtail config for syslog mode |
MachineGroupDetail ([group_name, …]) |
The machine group detail info |
IndexConfig ([ttl, line_config, …]) |
The index config of a logstore |
OssShipperConfig (oss_bucket, oss_prefix, …) |
An OSS shipper config |
OdpsShipperConfig (odps_endpoint, …[, …]) |
An ODPS shipper config |
ShipperTask (task_id, task_status, …) |
A shipper task |
Response Class¶
CreateProjectResponse (header[, resp]) |
Response of create_project |
DeleteProjectResponse (header[, resp]) |
|
GetProjectResponse (resp, header) |
|
ListProjectResponse (resp, header) |
GetLogsResponse (resp, header) |
The response of the GetLog API from log. |
ListLogstoresResponse (resp, header) |
The response of the ListLogstores API from log. |
ListTopicsResponse (resp, header) |
The response of the ListTopic API from log. |
GetCursorResponse (resp, header) |
The response of the get_cursor API from log. |
GetCursorTimeResponse (resp, header) |
The response of the get_cursor_time API from log. |
ListShardResponse (resp, header) |
The response of the list_shard API from log. |
DeleteShardResponse (header[, resp]) |
The response of the delete_shard API from log. |
GetHistogramsResponse (resp, header) |
The response of the GetHistograms API from log. |
Histogram (fromTime, toTime, count, progress) |
The class used to present the result of log histogram status. |
GetLogsResponse (resp, header) |
The response of the GetLog API from log. |
QueriedLog (timestamp, source, contents) |
The QueriedLog is a log of the GetLogsResponse, obtained from the log service. |
PullLogResponse (resp, header) |
The response of the pull_logs API from log. |
CreateIndexResponse (header[, resp]) |
The response of the create_index API from log. |
UpdateIndexResponse (header[, resp]) |
The response of the update_index API from log. |
DeleteIndexResponse (header[, resp]) |
The response of the delete_index API from log. |
GetIndexResponse (resp, header) |
The response of the get_index_config API from log. |
CreateLogtailConfigResponse (header[, resp]) |
The response of the create_logtail_config API from log. |
DeleteLogtailConfigResponse (header[, resp]) |
The response of the delete_logtail_config API from log. |
GetLogtailConfigResponse (resp, header) |
The response of the get_logtail_config API from log. |
UpdateLogtailConfigResponse (header[, resp]) |
The response of the update_logtail_config API from log. |
ListLogtailConfigResponse (resp, header) |
The response of the list_logtail_config API from log. |
CreateMachineGroupResponse (header[, resp]) |
The response of the create_machine_group API from log. |
DeleteMachineGroupResponse (header[, resp]) |
The response of the delete_machine_group API from log. |
GetMachineGroupResponse (resp, header) |
The response of the get_machine_group API from log. |
UpdateMachineGroupResponse (header[, resp]) |
The response of the update_machine_group API from log. |
ListMachineGroupResponse (resp, header) |
The response of the list_machine_group API from log. |
ListMachinesResponse (resp, header) |
The response of the list_machines API from log. |
ApplyConfigToMachineGroupResponse (header[, resp]) |
The response of the apply_config_to_machine_group API from log. |
RemoveConfigToMachineGroupResponse (header[, …]) |
The response of the remove_config_to_machine_group API from log. |
GetMachineGroupAppliedConfigResponse (resp, …) |
The response of the get_machine_group_applied_config API from log. |
GetConfigAppliedMachineGroupsResponse (resp, …) |
The response of the get_config_applied_machine_group API from log. |
CreateShipperResponse (header[, resp]) |
|
UpdateShipperResponse (header[, resp]) |
|
DeleteShipperResponse (header[, resp]) |
|
GetShipperConfigResponse (resp, header) |
|
ListShipperResponse (resp, header) |
|
GetShipperTasksResponse (resp, header) |
|
RetryShipperTasksResponse (header[, resp]) |
ConsumerGroupEntity (consumer_group_name, timeout) |
|
CreateConsumerGroupResponse (headers[, resp]) |
|
ConsumerGroupCheckPointResponse (resp, headers) |
|
ConsumerGroupHeartBeatResponse (resp, headers) |
|
ConsumerGroupUpdateCheckPointResponse (headers) |
|
DeleteConsumerGroupResponse (headers[, resp]) |
|
ListConsumerGroupResponse (resp, headers) |
|
UpdateConsumerGroupResponse (headers, resp) |
CreateEntityResponse (headers[, body]) |
|
UpdateEntityResponse (headers[, body]) |
|
DeleteEntityResponse (headers[, body]) |
|
GetEntityResponse (headers[, body]) |
|
ListEntityResponse (header, resp[, …]) |
ES Migration Class¶
MigrationManager ([hosts, indexes, query, …]) |
MigrationManager migrates data from Elasticsearch to Aliyun Log Service. |
Project¶
list_project ([offset, size]) |
list the projects. Unsuccessful operation will cause a LogException. |
create_project (project_name, project_des) |
Create a project. Unsuccessful operation will cause a LogException. |
get_project (project_name) |
get the project. Unsuccessful operation will cause a LogException. |
delete_project (project_name) |
delete the project. Unsuccessful operation will cause a LogException. |
copy_project (from_project, to_project[, …]) |
copy project, logstore, machine group and logtail config to the target project, expecting the target project does not contain logstores with the same names as the source project |
Logstore¶
copy_logstore (from_project, from_logstore, …) |
copy logstore, index and logtail config to the target logstore; machine groups are not included yet. |
list_logstore (project_name[, …]) |
list the logstores in a project. Unsuccessful operation will cause a LogException. |
create_logstore (project_name, logstore_name) |
create a logstore. Unsuccessful operation will cause a LogException. |
get_logstore (project_name, logstore_name) |
get the logstore meta info. Unsuccessful operation will cause a LogException. |
update_logstore (project_name, logstore_name) |
update the logstore meta info. Unsuccessful operation will cause a LogException. |
delete_logstore (project_name, logstore_name) |
delete a logstore. Unsuccessful operation will cause a LogException. |
list_topics (request) |
List all topics in a logstore. |
Index¶
create_index (project_name, logstore_name, …) |
create index for a logstore. Unsuccessful operation will cause a LogException. |
update_index (project_name, logstore_name, …) |
update index for a logstore. Unsuccessful operation will cause a LogException. |
delete_index (project_name, logstore_name) |
delete index of a logstore. Unsuccessful operation will cause a LogException. |
get_index_config (project_name, logstore_name) |
get the index config detail of a logstore. Unsuccessful operation will cause a LogException. |
Logtail Config¶
create_logtail_config (project_name, …) |
create logtail config in a project. Unsuccessful operation will cause a LogException. |
update_logtail_config (project_name, …) |
update logtail config in a project. Unsuccessful operation will cause a LogException. |
delete_logtail_config (project_name, config_name) |
delete logtail config in a project. Unsuccessful operation will cause a LogException. |
get_logtail_config (project_name, config_name) |
get logtail config in a project. Unsuccessful operation will cause a LogException. |
list_logtail_config (project_name[, offset, size]) |
list logtail config names in a project. Unsuccessful operation will cause a LogException. |
Machine Group¶
create_machine_group (project_name, group_detail) |
create machine group in a project. Unsuccessful operation will cause a LogException. |
delete_machine_group (project_name, group_name) |
delete machine group in a project. Unsuccessful operation will cause a LogException. |
update_machine_group (project_name, group_detail) |
update machine group in a project. Unsuccessful operation will cause a LogException. |
get_machine_group (project_name, group_name) |
get machine group in a project. Unsuccessful operation will cause a LogException. |
list_machine_group (project_name[, offset, size]) |
list machine group names in a project. Unsuccessful operation will cause a LogException. |
list_machines (project_name, group_name[, …]) |
list machines in a machine group. Unsuccessful operation will cause a LogException. |
Apply Logtail Config¶
apply_config_to_machine_group (project_name, …) |
apply a logtail config to a machine group. Unsuccessful operation will cause a LogException. |
remove_config_to_machine_group (project_name, …) |
remove a logtail config from a machine group. Unsuccessful operation will cause a LogException. |
get_machine_group_applied_configs (…) |
get the logtail config names applied in a machine group. Unsuccessful operation will cause a LogException. |
get_config_applied_machine_groups (…) |
get the machine group names that the logtail config applies to. Unsuccessful operation will cause a LogException. |
Shard¶
list_shards (project_name, logstore_name) |
list the shard meta of a logstore. Unsuccessful operation will cause a LogException. |
split_shard (project_name, logstore_name, …) |
split a readwrite shard into two shards. Unsuccessful operation will cause a LogException. |
merge_shard (project_name, logstore_name, shardId) |
merge two adjacent readwrite shards into one shard. Unsuccessful operation will cause a LogException. |
Cursor¶
get_cursor (project_name, logstore_name, …) |
Get cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
get_cursor_time (project_name, logstore_name, …) |
Get cursor time from log service. Unsuccessful operation will cause a LogException. |
get_previous_cursor_time (project_name, …) |
Get previous cursor time from log service. |
get_begin_cursor (project_name, …) |
Get begin cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
get_end_cursor (project_name, logstore_name, …) |
Get end cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException. |
Logs¶
put_logs (request) |
Put logs to log service. |
pull_logs (project_name, logstore_name, …) |
batch pull log data from log service. Unsuccessful operation will cause a LogException. |
pull_log (project_name, logstore_name, …[, …]) |
batch pull log data from log service using a time range. Unsuccessful operation will cause a LogException. |
pull_log_dump (project_name, logstore_name, …) |
dump all logs line by line into file_path; the time parameters are the log received time on the server side. |
get_log (project, logstore, from_time, to_time) |
Get logs from log service. |
get_logs (request) |
Get logs from log service. |
get_log_all (project, logstore, from_time, …) |
Get logs from log service. |
get_histograms (request) |
Get histograms of requested query from log service. |
get_project_logs (request) |
Get logs from log service. |
Consumer group¶
create_consumer_group (project, logstore, …) |
create consumer group |
update_consumer_group (project, logstore, …) |
Update consumer group |
delete_consumer_group (project, logstore, …) |
Delete consumer group |
list_consumer_group (project, logstore) |
List consumer group |
update_check_point (project, logstore, …[, …]) |
Update check point |
get_check_point (project, logstore, …[, shard]) |
Get check point |
Dashboard¶
list_dashboard (project[, offset, size]) |
list the Dashboards, get first 100 items by default. Unsuccessful operation will cause a LogException. |
create_dashboard (project, detail) |
Create Dashboard. |
get_dashboard (project, entity) |
Get Dashboard. |
update_dashboard (project, detail) |
Update Dashboard. |
delete_dashboard (project, entity) |
Delete Dashboard. |
Saved search¶
list_savedsearch (project[, offset, size]) |
list the Savedsearches, get first 100 items by default. Unsuccessful operation will cause a LogException. |
create_savedsearch (project, detail) |
Create Savedsearch. |
get_savedsearch (project, entity) |
Get Savedsearch. |
update_savedsearch (project, detail) |
Update Savedsearch. |
delete_savedsearch (project, entity) |
Delete Savedsearch. |
Alert¶
list_alert (project[, offset, size]) |
list the Alerts, get first 100 items by default. Unsuccessful operation will cause a LogException. |
create_alert (project, detail) |
Create Alert. |
get_alert (project, entity) |
Get Alert. |
update_alert (project, detail) |
Update Alert. |
delete_alert (project, entity) |
Delete Alert. |
Shipper¶
create_shipper (project, logstore, detail) |
Create Shipper. |
update_shipper (project, logstore, detail) |
Update Shipper. |
delete_shipper (project, logstore, entity) |
Delete Shipper. |
get_shipper (project, logstore, entity) |
Get Shipper. |
list_shipper (project, logstore[, offset, size]) |
list the Shippers, get first 100 items by default. Unsuccessful operation will cause a LogException. |
get_shipper_tasks (project_name, …[, …]) |
get odps/oss shipper tasks in a certain time range. Unsuccessful operation will cause a LogException. |
retry_shipper_tasks (project_name, …) |
retry failed tasks; only failed tasks can be retried. Unsuccessful operation will cause a LogException. |
Definitions¶
class aliyun.log.LogClient(endpoint, accessKeyId, accessKey, securityToken=None, source=None)[source]¶
Construct the LogClient with endpoint, accessKeyId, accessKey.
Parameters: - endpoint (string) – log service host name, for example, ch-hangzhou.log.aliyuncs.com or https://cn-beijing.log.aliyuncs.com
- accessKeyId (string) – aliyun accessKeyId
- accessKey (string) – aliyun accessKey
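A minimal client-construction sketch; the endpoint, credentials, and project are placeholders, and list_project (documented in the Project section above) is used only as a simple first call:

```python
from aliyun.log import LogClient

# Placeholder endpoint and credentials -- substitute your own region and keys.
endpoint = "cn-hangzhou.log.aliyuncs.com"  # an https:// prefix is also accepted
client = LogClient(endpoint, "your-access-key-id", "your-access-key")

# When using STS, pass the token as the securityToken argument:
# client = LogClient(endpoint, key_id, key, securityToken=token)

res = client.list_project(offset=0, size=10)  # raises LogException on failure
print(res.get_count())
```

This requires the aliyun-log-python-sdk package and valid credentials; the call fails with a LogException otherwise.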
apply_config_to_machine_group(project_name, config_name, group_name)[source]¶
apply a logtail config to a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name to apply
- group_name (string) – the machine group name
Returns: ApplyConfigToMachineGroupResponse
Raise: LogException
arrange_shard(project, logstore, count)[source]¶
arrange the shards of a logstore to the expected read-write shard count, which should be larger than the current one.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- count (int) – expected read-write shard count. should be larger than the current one.
Returns: ''
Raise: LogException
copy_data(project, logstore, from_time, to_time=None, to_client=None, to_project=None, to_logstore=None, shard_list=None, batch_size=None, compress=None, new_topic=None, new_source=None)[source]¶
copy data from one logstore to another one (which could be the same logstore, or one in a different region); the time is the log received time on the server side.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (string/int) – cursor value, could be begin, timestamp or readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – cursor value, default is “end”, could be begin, timestamp or readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_client (LogClient) – logclient instance, if empty will use source client
- to_project (string) – project name, if empty will use source project
- to_logstore (string) – logstore name, if empty will use source logstore
- shard_list (string) – shard number list, could be a comma-separated list or ranges: 1,20,31-40
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- new_topic (string) – overwrite the copied topic with the passed one
- new_source (string) – overwrite the copied source with the passed one
Returns: LogResponse {“total_count”: 30, “shards”: {0: 10, 1: 20}}
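The shard_list format above mixes single shard numbers and ranges. As an illustration only (this helper is not part of the SDK, which parses the format internally), it can be expanded like this:

```python
def expand_shard_list(spec):
    """Expand a shard_list string such as "1,20,31-40" into a sorted list of ints."""
    shards = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            shards.update(range(int(lo), int(hi) + 1))
        else:
            shards.add(int(part))
    return sorted(shards)

print(expand_shard_list("1,20,31-40"))  # [1, 20, 31, 32, ..., 40]
```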
copy_logstore(from_project, from_logstore, to_logstore, to_project=None, to_client=None)[source]¶
copy logstore, index and logtail config to the target logstore; machine groups are not included yet. The target logstore will be created if it does not exist.
Parameters: - from_project (string) – project name
- from_logstore (string) – logstore name
- to_logstore (string) – target logstore name
- to_project (string) – target project name, copy to same project if not being specified, will try to create it if not being specified
- to_client (LogClient) – logclient instance, use it to operate on the “to_project” if being specified for cross region purpose
Returns:
copy_project(from_project, to_project, to_client=None, copy_machine_group=False)[source]¶
copy project, logstore, machine group and logtail config to the target project, expecting the target project does not contain logstores with the same names as the source project.
Parameters: - from_project (string) – project name
- to_project (string) – project name
- to_client (LogClient) – logclient instance
- copy_machine_group (bool) – if copy machine group resources, False by default.
Returns: None
create_alert(project, detail)¶
Create Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_consumer_group(project, logstore, consumer_group, timeout, in_order=False)[source]¶
create consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- timeout (int) – time-out
- in_order (bool) – if consume in order, default is False
Returns: CreateConsumerGroupResponse
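A hedged sketch of creating a consumer group; the endpoint, credentials, project and logstore names below are placeholders:

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key")

# timeout is the heartbeat timeout in seconds: a consumer is considered
# offline if no heartbeat is received within this window.
# in_order=False lets shards be consumed concurrently.
client.create_consumer_group("my-project", "my-logstore",
                             consumer_group="my-consumer-group",
                             timeout=60, in_order=False)

# Checkpoints can later be read back per shard:
res = client.get_check_point("my-project", "my-logstore", "my-consumer-group")
```

The call requires a reachable Log Service endpoint; it raises LogException on failure.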
create_dashboard(project, detail)¶
Create Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_external_store(project_name, config)[source]¶
create an external store. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config (ExternalStoreConfig) – the external store config
Returns: CreateExternalStoreResponse
Raise: LogException
create_index(project_name, logstore_name, index_detail)[source]¶
create index for a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- index_detail (IndexConfig) – the index config detail used to create index
Returns: CreateIndexResponse
Raise: LogException
create_logstore(project_name, logstore_name, ttl=30, shard_count=2, enable_tracking=False, append_meta=False, auto_split=True, max_split_shard=64, preserve_storage=False)[source]¶
create a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- ttl (int) – the life cycle of log in the logstore in days, default 30, up to 3650
- shard_count (int) – the shard count of the logstore to create, default 2
- enable_tracking (bool) – enable web tracking, default is False
- append_meta (bool) – whether to append meta info (server received time and client IP) to each received log
- auto_split (bool) – automatically split shards; default is True
- max_split_shard (int) – max shards to split into, up to 64
- preserve_storage (bool) – if True, data is persisted permanently and the TTL is ignored.
Returns: CreateLogStoreResponse
Raise: LogException
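For example (a hedged sketch; endpoint, credentials and names are placeholders), creating a logstore with 90-day retention and auto-split capped at 16 shards:

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key")

# 90-day TTL, 2 initial shards, allow auto-split up to 16 shards.
client.create_logstore("my-project", "my-logstore",
                       ttl=90, shard_count=2,
                       auto_split=True, max_split_shard=16)
```

The call requires valid credentials and raises LogException on failure (e.g. if the logstore already exists).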
create_logtail_config(project_name, config_detail)[source]¶
create logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_detail (SeperatorFileConfigDetail, SimpleFileConfigDetail, FullRegFileConfigDetail, JsonFileConfigDetail, ApsaraFileConfigDetail, SyslogConfigDetail or CommonRegLogConfigDetail) – the logtail config detail info; use LogtailConfigGenerator.from_json to generate a config. Note: CommonRegLogConfigDetail is deprecated.
Returns: CreateLogtailConfigResponse
Raise: LogException
create_machine_group(project_name, group_detail)[source]¶
create machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_detail (MachineGroupDetail) – the machine group detail config
Returns: CreateMachineGroupResponse
Raise: LogException
create_project(project_name, project_des)[source]¶
Create a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- project_des (string) – the description of a project
Returns: CreateProjectResponse
Raise: LogException
create_savedsearch(project, detail)¶
Create Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
create_shipper(project, logstore, detail)¶
Create Shipper. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- detail (dict/string) – json string
Returns: CreateEntityResponse
Raise: LogException
delete_alert(project, entity)¶
Delete Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – alert name
Returns: DeleteEntityResponse
Raise: LogException
delete_consumer_group(project, logstore, consumer_group)[source]¶
Delete consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
Returns: None
delete_dashboard(project, entity)¶
Delete Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – dashboard name
Returns: DeleteEntityResponse
Raise: LogException
delete_external_store(project_name, store_name)[source]¶
delete an external store. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- store_name (string) – the external store name
Returns: DeleteExternalStoreResponse
Raise: LogException
delete_index(project_name, logstore_name)[source]¶
delete index of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: DeleteIndexResponse
Raise: LogException
delete_logstore(project_name, logstore_name)[source]¶
delete a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: DeleteLogStoreResponse
Raise: LogException
delete_logtail_config(project_name, config_name)[source]¶
delete logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name
Returns: DeleteLogtailConfigResponse
Raise: LogException
delete_machine_group(project_name, group_name)[source]¶
delete machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name
Returns: DeleteMachineGroupResponse
Raise: LogException
delete_project(project_name)[source]¶
delete project. Unsuccessful operation will cause a LogException.
Parameters: project_name (string) – the Project name
Returns: DeleteProjectResponse
Raise: LogException
delete_savedsearch(project, entity)¶
Delete Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – savedsearch name
Returns: DeleteEntityResponse
Raise: LogException
delete_shard(project_name, logstore_name, shardId)[source]¶
delete a readonly shard. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the read only shard id
Returns: ListShardResponse
Raise: LogException
delete_shipper(project, logstore, entity)¶
Delete Shipper. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- entity (string) – shipper name
Returns: DeleteEntityResponse
Raise: LogException
es_migration(hosts, project_name, indexes=None, query=None, scroll='5m', logstore_index_mappings=None, pool_size=10, time_reference=None, source=None, topic=None, wait_time_in_secs=60, auto_creation=True)[source]¶
migrate data from Elasticsearch to Aliyun Log Service
Parameters: - hosts (string) – a comma-separated list of source ES nodes. e.g. “localhost:9200,other_host:9200”
- project_name (string) – specify the project_name of your log services. e.g. “your_project”
- indexes (string) – a comma-separated list of source index names. e.g. “index1,index2”
- query (string) – used to filter docs, so that you can specify the docs you want to migrate. e.g. ‘{“query”: {“match”: {“title”: “python”}}}’
- scroll (string) – specify how long a consistent view of the index should be maintained for scrolled search. e.g. “5m”
- logstore_index_mappings (string) – specify the mappings of log service logstore and ES index. e.g. ‘{“logstore1”: “my_index*”, “logstore2”: “index1,index2”, “logstore3”: “index3”}’
- pool_size (int) – specify the size of process pool. e.g. 10
- time_reference (string) – specify what ES doc’s field to use as log’s time field. e.g. “field1”
- source (string) – specify the value of log’s source field. e.g. “your_source”
- topic (string) – specify the value of log’s topic field. e.g. “your_topic”
- wait_time_in_secs (int) – specify the waiting time between initializing aliyun log and executing the data migration task. e.g. 60
- auto_creation (bool) – specify whether to let the tool create logstore and index automatically for you. e.g. True
Returns: MigrationResponse
Raise: Exception
get_alert(project, entity)¶
Get Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – alert name
Returns: GetEntityResponse
Raise: LogException
get_begin_cursor(project_name, logstore_name, shard_id)[source]¶
Get begin cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
Returns: GetCursorResponse
Raise: LogException
get_check_point(project, logstore, consumer_group, shard=-1)[source]¶
Get check point
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
Returns: ConsumerGroupCheckPointResponse
get_check_point_fixed(project, logstore, consumer_group, shard=-1)[source]¶
Get check point
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
Returns: ConsumerGroupCheckPointResponse
get_config_applied_machine_groups(project_name, config_name)[source]¶
get the machine group names that the logtail config applies to. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name used to apply
Returns: GetConfigAppliedMachineGroupsResponse
Raise: LogException
get_cursor(project_name, logstore_name, shard_id, start_time)[source]¶
Get cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- start_time (string/int) – the start time of the cursor, e.g. 1441093445 or “begin”/“end”, or readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
Returns: GetCursorResponse
Raise: LogException
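get_cursor is typically paired with pull_logs (see the Logs summary above). A hedged sketch, with placeholder endpoint, credentials and names, reading one batch from shard 0 starting an hour ago:

```python
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key")

cursor = client.get_cursor("my-project", "my-logstore",
                           shard_id=0, start_time="1 hour ago").get_cursor()

# pull_logs fetches log groups from the cursor position; the response
# carries the next cursor to continue from.
res = client.pull_logs("my-project", "my-logstore", 0, cursor, count=100)
next_cursor = res.get_next_cursor()
```

Looping on next_cursor until it stops advancing drains the shard; both calls raise LogException on failure.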
get_cursor_time(project_name, logstore_name, shard_id, cursor)[source]¶
Get cursor time from log service. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the cursor to get its service receive time
Returns: GetCursorTimeResponse
Raise: LogException
get_dashboard(project, entity)¶
Get Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – dashboard name
Returns: GetEntityResponse
Raise: LogException
get_end_cursor(project_name, logstore_name, shard_id)[source]¶
Get end cursor from log service for batch pulling logs. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
Returns: GetCursorResponse
Raise: LogException
get_external_store(project_name, store_name)[source]¶
get the external store meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- store_name (string) – the external store name
Returns: GetExternalStoreResponse
Raise: LogException
get_histograms(request)[source]¶
Get histograms of requested query from log service. Unsuccessful operation will cause a LogException.
Parameters: request (GetHistogramsRequest) – the GetHistograms request parameters class.
Returns: GetHistogramsResponse
Raise: LogException
-
get_index_config
(project_name, logstore_name)[source]¶ get index config detail of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: GetIndexResponse
Raise: LogException
-
get_log
(project, logstore, from_time, to_time, topic=None, query=None, reverse=False, offset=0, size=100)[source]¶ Get logs from log service; will retry when incomplete. Unsuccessful operation will cause a LogException. Note: for a larger volume of data (e.g. > 1 million logs), use get_log_all
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (int/string) – the begin timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (int/string) – the end timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- topic (string) – topic name of logs, could be None
- query (string) – user defined query, could be None
- reverse (bool) – if reverse is set to true, the query will return the latest logs first, default is false
- offset (int) – line offset of return logs
- size (int) – max line number of return logs, -1 means get all
Returns: GetLogsResponse
Raise: LogException
-
get_log_all
(project, logstore, from_time, to_time, topic=None, query=None, reverse=False, offset=0)[source]¶ Get logs from log service; will retry when incomplete. Unsuccessful operation will cause a LogException. Different from get_log with size=-1: it iteratively fetches all data in batches of 100 items and yields each batch; in the CLI, a JMES filter could be applied to each batch, which makes it possible to fetch a larger volume of data.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- from_time (int/string) – the begin timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (int/string) – the end timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- topic (string) – topic name of logs, could be None
- query (string) – user defined query, could be None
- reverse (bool) – if reverse is set to true, the query will return the latest logs first, default is false
- offset (int) – offset to start, by default is 0
Returns: GetLogsResponse iterator
Raise: LogException
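The batched iteration that get_log_all performs can be sketched in plain Python. This is a hypothetical mock of the pattern, not SDK code: `fetch_page` stands in for the service call, and the loop stops when a short (or empty) batch signals the end of the data.

```python
# Sketch of the get_log_all batching pattern: fetch `batch_size` items
# at a time, yield each batch, stop when a batch comes back short.
def iter_all(fetch_page, batch_size=100):
    offset = 0
    while True:
        batch = fetch_page(offset, batch_size)
        if not batch:
            return
        yield batch
        if len(batch) < batch_size:
            return
        offset += batch_size

# Usage with a fake data source of 250 "logs":
logs = list(range(250))
pages = list(iter_all(lambda off, n: logs[off:off + n]))
# → three batches of 100, 100 and 50 items
```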
-
get_logs
(request)[source]¶ Get logs from log service. Unsuccessful operation will cause a LogException. Note: for a larger volume of data (e.g. > 1 million logs), use get_log_all
Parameters: request (GetLogsRequest) – the GetLogs request parameters class. Returns: GetLogsResponse Raise: LogException
-
get_logstore
(project_name, logstore_name)[source]¶ get the logstore meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: GetLogStoreResponse
Raise: LogException
-
get_logtail_config
(project_name, config_name)[source]¶ get logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name
Returns: GetLogtailConfigResponse
Raise: LogException
-
get_machine_group
(project_name, group_name)[source]¶ get machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name to get
Returns: GetMachineGroupResponse
Raise: LogException
-
get_machine_group_applied_configs
(project_name, group_name)[source]¶ get the logtail config names applied in a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the machine group name
Returns: GetMachineGroupAppliedConfigResponse
Raise: LogException
-
get_previous_cursor_time
(project_name, logstore_name, shard_id, cursor, normalize=True)[source]¶ Get previous cursor time from log service. Note: with normalize=True, if the cursor is out of range, it will be normalized to the nearest cursor. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the cursor to get its service receive time
- normalize (bool) – fix the cursor or not if it’s out of scope
Returns: GetCursorTimeResponse
Raise: LogException
-
get_project
(project_name)[source]¶ get project. Unsuccessful operation will cause a LogException.
Parameters: project_name (string) – the Project name Returns: GetProjectResponse Raise: LogException
-
get_project_logs
(request)[source]¶ Get logs from log service. Unsuccessful operation will cause a LogException.
Parameters: request (GetProjectLogsRequest) – the GetProjectLogs request parameters class. Returns: GetLogsResponse Raise: LogException
-
get_resource_usage
(project)[source]¶ get resource usage of the project. Unsuccessful operation will cause a LogException.
Parameters: project (string) – project name Returns: dict Raise: LogException
-
get_savedsearch
(project, entity)¶ Get Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- entity (string) – savedsearch name
Returns: GetEntityResponse
Raise: LogException
-
get_shipper
(project, logstore, entity)¶ Get Shipper. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- entity (string) – shipper name
Returns: GetEntityResponse
Raise: LogException
-
get_shipper_tasks
(project_name, logstore_name, shipper_name, start_time, end_time, status_type='', offset=0, size=100)[source]¶ get odps/oss shipper tasks in a certain time range. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- start_time (int) – the start timestamp
- end_time (int) – the end timestamp
- status_type (string) – one of [‘’, ‘fail’, ‘success’, ‘running’]; if status_type is ‘’, tasks of all status types are returned
- offset (int) – the begin task offset, -1 means all
- size (int) – the needed tasks count
Returns: GetShipperTasksResponse
Raise: LogException
-
heart_beat
(project, logstore, consumer_group, consumer, shards=None)[source]¶ Heartbeat the consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- consumer (string) – consumer name
- shards (int list) – shard id list e.g. [0,1,2]
Returns: None
-
list_alert
(project, offset=0, size=100)¶ list the Alerts; gets the first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_consumer_group
(project, logstore)[source]¶ List consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
Returns: ListConsumerGroupResponse
-
list_dashboard
(project, offset=0, size=100)¶ list the Dashboards; gets the first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_external_store
(project_name, external_store_name_pattern=None, offset=0, size=100)[source]¶ list the external stores in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- external_store_name_pattern (string) – sub-string of the external store name; the server returns external store names containing this sub-string
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_logstore
(project_name, logstore_name_pattern=None, offset=0, size=100)[source]¶ list the logstores in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name_pattern (string) – sub-string of the logstore name; the server returns logstore names containing this sub-string
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_logstores
(request)[source]¶ List all logstores of requested project. Unsuccessful operation will cause a LogException.
Parameters: request (ListLogstoresRequest) – the ListLogstores request parameters class. Returns: ListLogStoresResponse Raise: LogException
-
list_logtail_config
(project_name, offset=0, size=100)[source]¶ list logtail config names in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- offset (int) – the offset of all config names
- size (int) – the max return names count, -1 means all
Returns: ListLogtailConfigResponse
Raise: LogException
-
list_machine_group
(project_name, offset=0, size=100)[source]¶ list machine group names in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- offset (int) – the offset of all group name
- size (int) – the max return names count, -1 means all
Returns: ListMachineGroupResponse
Raise: LogException
-
list_machines
(project_name, group_name, offset=0, size=100)[source]¶ list machines in a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_name (string) – the group name to list
- offset (int) – the offset of all group name
- size (int) – the max return names count, -1 means all
Returns: ListMachinesResponse
Raise: LogException
-
list_project
(offset=0, size=100)[source]¶ list the projects. Unsuccessful operation will cause a LogException.
Parameters: - offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means return all data
Returns: ListProjectResponse
Raise: LogException
-
list_savedsearch
(project, offset=0, size=100)¶ list the Savedsearches; gets the first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_shards
(project_name, logstore_name)[source]¶ list the shard meta of a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
Returns: ListShardResponse
Raise: LogException
-
list_shipper
(project, logstore, offset=0, size=100)¶ list the Shippers; gets the first 100 items by default. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – the Project name
- logstore (string) – the logstore name
- offset (int) – the offset of all the matched names
- size (int) – the max return names count, -1 means all
Returns: ListLogStoreResponse
Raise: LogException
-
list_topics
(request)[source]¶ List all topics in a logstore. Unsuccessful operation will cause a LogException.
Parameters: request (ListTopicsRequest) – the ListTopics request parameters class. Returns: ListTopicsResponse Raise: LogException
-
merge_shard
(project_name, logstore_name, shardId)[source]¶ merge two adjacent readwrite shards into one shard. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the shard id of the left shard, server will determine the right adjacent shardId
Returns: ListShardResponse
Raise: LogException
-
pull_log
(project_name, logstore_name, shard_id, from_time, to_time, batch_size=None, compress=None)[source]¶ batch pull log data from log service using a time range. Unsuccessful operation will cause a LogException. The time parameters refer to the time when the server received the logs.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- from_time (string/int) – could be “begin”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – could be “end”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 1000
- compress (bool) – if use compression, by default it’s True
Returns: PullLogResponse
Raise: LogException
-
pull_log_dump
(project_name, logstore_name, from_time, to_time, file_path, batch_size=None, compress=None, encodings=None, shard_list=None, no_escape=None)[source]¶ dump all logs, one log per line, into files at file_path; the time parameters are the log received time on the server side.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- from_time (string/int) – could be “begin”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – could be “end”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- file_path (string) – file path with {} for shard id. e.g. “/data/dump_{}.data”, {} will be replaced with each partition.
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- encodings (string list) – encodings like [“utf8”, “latin1”] etc. used to dump the logs in json format to file; default is [“utf8”]
- shard_list (string) – shard number list. could be comma seperated list or range: 1,20,31-40
- no_escape (bool) – if True, do not escape non-ANSI characters; default is to escape them
Returns: LogResponse {“total_count”: 30, “files”: {‘file_path_1’: 10, “file_path_2”: 20}}
Raise: LogException
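Two parameter formats documented above can be illustrated in plain Python (this mirrors the documented behavior, it is not the SDK's internal code): file_path with “{}” replaced per shard, and shard_list as a comma-separated list of ids and ranges.

```python
# Per-shard output path: "{}" in file_path is replaced with the shard id.
def shard_file(file_path, shard_id):
    return file_path.format(shard_id)

# shard_list parsing for the documented "1,20,31-40" style strings.
def parse_shard_list(shard_list):
    shards = []
    for part in shard_list.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            shards.extend(range(int(lo), int(hi) + 1))
        else:
            shards.append(int(part))
    return shards

print(shard_file("/data/dump_{}.data", 3))  # → /data/dump_3.data
print(parse_shard_list("1,20,31-40"))       # → [1, 20, 31, 32, ..., 40]
```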
-
pull_logs
(project_name, logstore_name, shard_id, cursor, count=None, end_cursor=None, compress=None)[source]¶ batch pull log data from log service. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shard_id (int) – the shard id
- cursor (string) – the start cursor to get data
- count (int) – the required pull log package count, default 1000 packages
- end_cursor (string) – the end cursor position to get data
- compress (boolean) – if use zip compress for transfer data, default is True
Returns: PullLogResponse
Raise: LogException
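The cursor-driven loop that pull_logs enables can be sketched as follows. This is a hedged mock, not SDK code: `pull` stands in for the service call, which in the real client returns a response exposing the next cursor.

```python
# Pull batches from a start cursor until the end cursor is reached.
# `pull(cursor)` returns (data, next_cursor), standing in for the service.
def pull_until(pull, cursor, end_cursor):
    batches = []
    while cursor != end_cursor:
        data, cursor = pull(cursor)
        batches.append(data)
    return batches

# Fake service: cursors are ints 0..3, each pull returns one batch.
fake = {0: (["a"], 1), 1: (["b"], 2), 2: (["c"], 3)}
batches = pull_until(lambda c: fake[c], 0, 3)
# → [['a'], ['b'], ['c']]
```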
-
put_log_raw
(project, logstore, log_group, compress=None)[source]¶ Put logs to log service using raw protobuf data.
Parameters: - project (string) – the Project name
- logstore (string) – the logstore name
- log_group (LogGroup) – log group structure
- compress (boolean) – compress or not, by default is True
Returns: PutLogsResponse
Raise: LogException
-
put_logs
(request)[source]¶ Put logs to log service: up to 512000 logs and up to 10MB in size per request. Unsuccessful operation will cause a LogException.
Parameters: request (PutLogsRequest) – the PutLogs request parameters class Returns: PutLogsResponse Raise: LogException
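Since each request is capped at the limits above, a sender typically chunks its logs before building requests. A minimal sketch (not SDK code) of chunking by log count; a real sender would also track the 10MB size limit:

```python
# Split a log list into chunks no larger than the per-request log limit.
MAX_LOGS_PER_REQUEST = 512000

def chunk_logs(logs, limit=MAX_LOGS_PER_REQUEST):
    for i in range(0, len(logs), limit):
        yield logs[i:i + limit]

# With a smaller limit for illustration:
sizes = [len(c) for c in chunk_logs(list(range(10)), limit=4)]
# → [4, 4, 2]
```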
-
remove_config_to_machine_group
(project_name, config_name, group_name)[source]¶ remove a logtail config from a machine group. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_name (string) – the logtail config name to remove
- group_name (string) – the machine group name
Returns: RemoveConfigToMachineGroupResponse
Raise: LogException
-
retry_shipper_tasks
(project_name, logstore_name, shipper_name, task_list)[source]¶ retry failed tasks; only failed tasks can be retried. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shipper_name (string) – the shipper name
- task_list (string array) – the failed task_id list, e.g. [‘failed_task_id_1’, ‘failed_task_id_2’, …]; currently at most 10 tasks can be retried each time
Returns: RetryShipperTasksResponse
Raise: LogException
-
set_source
(source)[source]¶ Set the source of the log client
Parameters: source (string) – new source Returns: None
-
set_user_agent
(user_agent)[source]¶ set user agent
Parameters: user_agent (string) – user agent Returns: None
-
split_shard
(project_name, logstore_name, shardId, split_hash)[source]¶ split a readwrite shard into two shards. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- shardId (int) – the shard id
- split_hash (string) – the internal hash between the shard begin and end hash
Returns: ListShardResponse
Raise: LogException
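One way to choose a split_hash is the midpoint of the shard's begin/end hash range, which by definition lies between them. This helper is hypothetical (not part of the SDK) and assumes the 32-hex-digit hash keys used by shards:

```python
# Midpoint of a 32-hex-digit hash range, as a candidate split_hash.
def mid_hash(begin_hash, end_hash):
    lo = int(begin_hash, 16)
    hi = int(end_hash, 16)
    return format((lo + hi) // 2, "032x")

print(mid_hash("0" * 32, "f" * 32))
# → 7fffffffffffffffffffffffffffffff
```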
-
transform_data
(project, logstore, config, from_time, to_time=None, to_client=None, to_project=None, to_logstore=None, shard_list=None, batch_size=None, compress=None, cg_name=None, c_name=None, cg_heartbeat_interval=None, cg_data_fetch_interval=None, cg_in_order=None, cg_worker_pool_size=None)[source]¶ transform data from one logstore to another (could be the same one, or one in a different region); the time passed is the log received time on the server side. There are two modes: batch mode and consumer group mode. For batch mode, just leave cg_name and the later options as None.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- config (string) – transform config imported or path of config (in python)
- from_time (string/int) – could be “begin”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_time (string/int) – could be “end”, a timestamp, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- to_client (LogClient) – logclient instance, if empty will use source client
- to_project (string) – project name, if empty will use source project
- to_logstore (string) – logstore name, if empty will use source logstore
- shard_list (string) – shard number list. could be comma seperated list or range: 1,20,31-40
- batch_size (int) – batch size to fetch the data in each iteration. by default it’s 500
- compress (bool) – if use compression, by default it’s True
- cg_name (string) – consumer group name to enable scalability and availability support.
- c_name (string) – consumer name for consumer group mode; must be different for each consumer in one group, normally leave it as the default: CLI-transform-data-${process_id}
- cg_heartbeat_interval (int) – cg_heartbeat_interval, default 20
- cg_data_fetch_interval (int) – cg_data_fetch_interval, default 2
- cg_in_order (bool) – cg_in_order, default False
- cg_worker_pool_size (int) – cg_worker_pool_size, default 2
Returns: LogResponse {“total_count”: 30, “shards”: {0: {“count”: 10, “removed”: 1}, 2: {“count”: 20, “removed”: 1}}}
-
update_alert
(project, detail)¶ Update Alert. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_check_point
(project, logstore, consumer_group, shard, check_point, consumer='', force_success=True)[source]¶ Update check point
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- shard (int) – shard id
- check_point (string) – checkpoint name
- consumer (string) – consumer name
- force_success (bool) – if force to succeed
Returns: None
-
update_consumer_group
(project, logstore, consumer_group, timeout=None, in_order=None)[source]¶ Update consumer group
Parameters: - project (string) – project name
- logstore (string) – logstore name
- consumer_group (string) – consumer group name
- timeout (int) – timeout
- in_order (bool) – order
Returns: None
-
update_dashboard
(project, detail)¶ Update Dashboard. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_external_store
(project_name, config)[source]¶ update the external store config. Unsuccessful operation will cause a LogException.
Parameters: config – the external store config
Returns: UpdateExternalStoreResponse Raise: LogException
-
update_index
(project_name, logstore_name, index_detail)[source]¶ update index for a logstore. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- index_detail (IndexConfig) – the index config detail used to update index
Returns: UpdateIndexResponse
Raise: LogException
-
update_logstore
(project_name, logstore_name, ttl=None, enable_tracking=None, shard_count=None, append_meta=None, auto_split=None, max_split_shard=None, preserve_storage=None)[source]¶ update the logstore meta info. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- logstore_name (string) – the logstore name
- ttl (int) – the life cycle of log in the logstore in days
- enable_tracking (bool) – enable web tracking
- shard_count (int) – deprecated, the shard count could only be updated by split & merge
- append_meta (bool) – allow appending meta info (server received time and client external IP) to each received log
- auto_split (bool) – auto split shard; default is True, and max_split_shard will be 64 by default
- max_split_shard (int) – max shard to split, up to 64
- preserve_storage (bool) – if always persist data, TTL will be ignored.
Returns: UpdateLogStoreResponse
Raise: LogException
-
update_logtail_config
(project_name, config_detail)[source]¶ update logtail config in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- config_detail (LogtailConfigGenerator or SeperatorFileConfigDetail or SimpleFileConfigDetail or FullRegFileConfigDetail or JsonFileConfigDetail or ApsaraFileConfigDetail or SyslogConfigDetail or CommonRegLogConfigDetail) – the logtail config detail info; use LogtailConfigGenerator.from_json to generate the config
Returns: UpdateLogtailConfigResponse
Raise: LogException
-
update_machine_group
(project_name, group_detail)[source]¶ update machine group in a project. Unsuccessful operation will cause a LogException.
Parameters: - project_name (string) – the Project name
- group_detail (MachineGroupDetail) – the machine group detail config
Returns: UpdateMachineGroupResponse
Raise: LogException
-
update_savedsearch
(project, detail)¶ Update Savedsearch. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
update_shipper
(project, logstore, detail)¶ Update Shipper. Unsuccessful operation will cause a LogException.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- detail (dict/string) – json string
Returns: UpdateEntityResponse
Raise: LogException
-
class
aliyun.log.
LogException
(errorCode, errorMessage, requestId='', resp_status=200, resp_header='', resp_body='')[source]¶ The Exception of the log request & response.
Parameters: - errorCode (string) – log service error code
- errorMessage (string) – detailed information for the exception
- requestId (string) – the request id of the response, ‘’ is set if client error
-
class
aliyun.log.
GetHistogramsRequest
(project=None, logstore=None, fromTime=None, toTime=None, topic=None, query=None)[source]¶ The request used to get histograms of a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- fromTime (int/string) – the begin time, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00” or “2018-01-02 12:12:10”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- toTime (int/string) – the end time, or a readable time like “%Y-%m-%d %H:%M:%S<time_zone>” e.g. “2018-01-02 12:12:10+8:00” or “2018-01-02 12:12:10”, also supports human readable strings, e.g. “1 hour ago”, “now”, “yesterday 0:0:0”, refer to https://aliyun-log-cli.readthedocs.io/en/latest/tutorials/tutorial_human_readable_datetime.html
- topic (string) – topic name of logs
- query (string) – user defined query
-
class
aliyun.log.
GetLogsRequest
(project=None, logstore=None, fromTime=None, toTime=None, topic=None, query=None, line=100, offset=0, reverse=False)[source]¶ The request used to get logs by a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- fromTime (int/string) – the begin time, or format of time in format “%Y-%m-%d %H:%M:%S” e.g. “2018-01-02 12:12:10”
- toTime (int/string) – the end time, or format of time in format “%Y-%m-%d %H:%M:%S” e.g. “2018-01-02 12:12:10”
- topic (string) – topic name of logs
- query (string) – user defined query
- line (int) – max line number of return logs
- offset (int) – line offset of return logs
- reverse (bool) – if reverse is set to true, the query will return the latest logs first
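The fromTime/toTime parameters above accept either an epoch timestamp or a “%Y-%m-%d %H:%M:%S” string. A stdlib-only sketch of that conversion (assuming the string is interpreted as UTC; the service also accepts explicit time zones like “+8:00”):

```python
import calendar
import time

# Accept either an int epoch timestamp or a "%Y-%m-%d %H:%M:%S" string.
def to_timestamp(value):
    if isinstance(value, int):
        return value
    return calendar.timegm(time.strptime(value, "%Y-%m-%d %H:%M:%S"))

print(to_timestamp("2018-01-02 12:12:10"))  # → 1514895130 (UTC)
print(to_timestamp(1514895130))             # → 1514895130
```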
-
class
aliyun.log.
GetProjectLogsRequest
(project=None, query=None)[source]¶ The request used to get logs by a query from log cross multiple logstores.
Parameters: - project (string) – project name
- query (string) – user defined query
-
class
aliyun.log.
IndexConfig
(ttl=1, line_config=None, key_config_list=None, all_keys_config=None, log_reduce=None)[source]¶ The index config of a logstore
Parameters: - ttl (int) – this parameter is deprecated; the ttl is the same as the logstore’s ttl
- line_config (IndexLineConfig) – the index config of the whole log line
- key_config_list (dict) – dict (string => IndexKeyConfig), the index key configs of the keys
- all_keys_config (IndexKeyConfig) – the key config of all keys; a newly created logstore should never use this param, it is only kept for compatibility with old configs
- log_reduce (bool) – whether to enable LogReduce
-
class
aliyun.log.
ListTopicsRequest
(project=None, logstore=None, token=None, line=None)[source]¶ The request used to get topics of a query from log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- token (string) – the start token to list topics
- line (int) – max topic counts to return
-
class
aliyun.log.
ListLogstoresRequest
(project=None)[source]¶ The request used to list log store from log.
Parameters: project (string) – project name
-
class
aliyun.log.
PluginConfigDetail
(logstoreName, configName, plugin, **extended_items)[source]¶ The logtail config for simple mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file pattern, e.g. *.log; it will match /apsara/nuwa/…/*.log
- localStorage (bool) – if use local cache 1GB when logtail is offline. default is True.
- enableRawLog (bool) – if upload raw data in content, default is False
- topicFormat (string) – “none”, “group_topic” or a regex to extract value from file path, e.g. “/test/(\w+).log” will extract each file name as topic; default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folder to scan; by default it’s 100, 0 means just scan the root folder
- preserve (bool) – if preserve time-out; default is False, 30 min time-out if set to True
- preserveDepth (int) – time-out folder depth. 1-3
- filterKey (string list) – only keep logs which match the keys, e.g. [“city”, “location”] will only keep logs matching the two fields
- filterRegex (string list) – matched value for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”] note, it’s regex value list
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
-
class
aliyun.log.
SeperatorFileConfigDetail
(logstoreName, configName, logPath, filePattern, logSample, separator, key, timeKey='', timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)[source]¶ The logtail config for separator mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – folder of the log path, e.g. /apsara/nuwa/
- filePattern (string) – file pattern, e.g. *.log; it will match /apsara/nuwa/…/*.log
- logSample (string) – log sample. e.g. shanghai|2000|east
- separator (string) – ‘\t’ for tab, ‘ ’ for space, ‘|’, or up to 3 chars like “&&&” or “||” etc.
- key (string list) – keys to map the fields like [“city”, “population”, “location”]
- timeKey (string) – one key name in key to set the time or set it None to use system time.
- timeFormat (string) – when timeKey is not None, set its format, refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – if use local cache 1GB when logtail is offline. default is True.
- enableRawLog (bool) – if upload raw data in content, default is False
- topicFormat (string) – “none”, “group_topic” or regex to extract value from file path e.g. “/test/(w+).log” will extract each file as topic, default is “none”
- fileEncoding (string) – “utf8” or “gbk” so far
- maxDepth (int) – max depth of folder to scan, by default its 100, 0 means just scan the root folder
- preserve (bool) – if preserve time-out, by default is False, 30 min time-out if set it as True
- preserveDepth (int) – time-out folder depth. 1-3
- filterKey (string list) – only keep log which match the keys. e.g. [“city”, “location”] will only scan files math the two fields
- filterRegex (string list) – matched value for filterKey, e.g. [“shanghai|beijing|nanjing”, “east”] note, it’s regex value list
- createTime (int) – timestamp of created, only useful when getting data from REST
- modifyTime (int) – timestamp of last modified time, only useful when getting data from REST
- extended_items (dict) – extended items
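Conceptually, separator mode splits each log line on the separator and maps the pieces onto key. The sketch below is a local illustration of that mapping only; the real parsing happens inside the logtail agent.

```python
def parse_separator_line(line, separator, keys):
    """Split one log line on the separator and zip the pieces with keys,
    the way separator mode maps a line onto named fields."""
    values = line.split(separator)
    return dict(zip(keys, values))

# Matches the logSample and key examples above
fields = parse_separator_line("shanghai|2000|east", "|",
                              ["city", "population", "location"])
print(fields)  # {'city': 'shanghai', 'population': '2000', 'location': 'east'}
```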
class aliyun.log.SimpleFileConfigDetail(logstoreName, configName, logPath, filePattern, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, **extended_items)
The logtail config for simple mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – the log folder path, e.g. "/apsara/nuwa/"
- filePattern (string) – the file name pattern, e.g. "*.log"; combined with logPath it matches "/apsara/nuwa/…/*.log"
- localStorage (bool) – whether to use a 1GB local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content; default is False
- topicFormat (string) – "none", "group_topic", or a regex to extract the topic from the file path, e.g. "/test/(\w+).log" uses each file name as the topic; default is "none"
- fileEncoding (string) – "utf8" or "gbk" so far
- maxDepth (int) – max folder depth to scan; default is 100; 0 means scan only the root folder
- preserve (bool) – whether to preserve the monitor time-out; default is False; if set to True, a 30-minute time-out applies
- preserveDepth (int) – the folder depth to which the time-out applies (1-3)
- filterKey (string list) – only keep logs matching the keys, e.g. ["city", "location"] scans only files matching the two fields
- filterRegex (string list) – the matched values for filterKey, e.g. ["shanghai|beijing|nanjing", "east"]; note it's a list of regex values
- createTime (int) – creation timestamp; only meaningful when getting data from REST
- modifyTime (int) – last-modified timestamp; only meaningful when getting data from REST
- extended_items (dict) – extended items
class aliyun.log.FullRegFileConfigDetail(logstoreName, configName, logPath, filePattern, logSample, logBeginRegex=None, regex=None, key=None, timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, **extended_items)
The logtail config for full regex mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – the log folder path, e.g. "/apsara/nuwa/"
- filePattern (string) – the file name pattern, e.g. "*.log"; combined with logPath it matches "/apsara/nuwa/…/*.log"
- logSample (string) – a sample log line, e.g. "shanghai|2000|east"
- logBeginRegex (string) – regex matching the start of a log entry; None means ".*", i.e. single-line mode
- regex (string) – regex to extract fields from the log; None means "(.*)", which captures the whole line
- key (string list) – keys to map the fields, e.g. ["city", "population", "location"]; None means ["content"]
- timeFormat (string) – when a time key is set, its time format; refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – whether to use a 1GB local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content; default is False
- topicFormat (string) – "none", "group_topic", or a regex to extract the topic from the file path, e.g. "/test/(\w+).log" uses each file name as the topic; default is "none"
- fileEncoding (string) – "utf8" or "gbk" so far
- maxDepth (int) – max folder depth to scan; default is 100; 0 means scan only the root folder
- preserve (bool) – whether to preserve the monitor time-out; default is False; if set to True, a 30-minute time-out applies
- preserveDepth (int) – the folder depth to which the time-out applies (1-3)
- filterKey (string list) – only keep logs matching the keys, e.g. ["city", "location"] scans only files matching the two fields
- filterRegex (string list) – the matched values for filterKey, e.g. ["shanghai|beijing|nanjing", "east"]; note it's a list of regex values
- createTime (int) – creation timestamp; only meaningful when getting data from REST
- modifyTime (int) – last-modified timestamp; only meaningful when getting data from REST
- extended_items (dict) – extended items
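Full regex mode maps a field-extraction regex's capture groups onto key. The regex below is a hypothetical one matching the logSample shape above; the actual extraction runs in the logtail agent.

```python
import re

def extract_fields(line, regex, keys):
    """Apply the field-extraction regex to one log entry and map the
    capture groups onto keys, as full regex mode does."""
    m = re.match(regex, line)
    return dict(zip(keys, m.groups())) if m else None

# Hypothetical regex for a "shanghai|2000|east"-shaped entry
fields = extract_fields("shanghai|2000|east", r"(\w+)\|(\d+)\|(\w+)",
                        ["city", "population", "location"])
print(fields)  # {'city': 'shanghai', 'population': '2000', 'location': 'east'}

# The defaults: regex "(.*)" with keys ["content"] capture the whole line
print(extract_fields("hello world", r"(.*)", ["content"]))
```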
class aliyun.log.JsonFileConfigDetail(logstoreName, configName, logPath, filePattern, timeKey='', timeFormat=None, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)
The logtail config for json mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – the log folder path, e.g. "/apsara/nuwa/"
- filePattern (string) – the file name pattern, e.g. "*.log"; combined with logPath it matches "/apsara/nuwa/…/*.log"
- timeKey (string) – one key name in the JSON used as the log time, or None to use system time
- timeFormat (string) – when timeKey is not None, its time format; refer to https://help.aliyun.com/document_detail/28980.html?spm=5176.2020520112.113.4.2243b18eHkxdNB
- localStorage (bool) – whether to use a 1GB local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content; default is False
- topicFormat (string) – "none", "group_topic", or a regex to extract the topic from the file path, e.g. "/test/(\w+).log" uses each file name as the topic; default is "none"
- fileEncoding (string) – "utf8" or "gbk" so far
- maxDepth (int) – max folder depth to scan; default is 100; 0 means scan only the root folder
- preserve (bool) – whether to preserve the monitor time-out; default is False; if set to True, a 30-minute time-out applies
- preserveDepth (int) – the folder depth to which the time-out applies (1-3)
- filterKey (string list) – only keep logs matching the keys, e.g. ["city", "location"] scans only files matching the two fields
- filterRegex (string list) – the matched values for filterKey, e.g. ["shanghai|beijing|nanjing", "east"]; note it's a list of regex values
- createTime (int) – creation timestamp; only meaningful when getting data from REST
- modifyTime (int) – last-modified timestamp; only meaningful when getting data from REST
- extended_items (dict) – extended items
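The timeKey/timeFormat pair in json mode can be illustrated locally: each line is a JSON object, and the named field is parsed with the given format, falling back to system time. A sketch of that semantics, not the agent's implementation:

```python
import json
import time

def parse_json_log(line, time_key=None, time_format=None):
    """Parse one json-mode log line; if time_key is set, take the log
    time from that field using time_format, else use system time."""
    fields = json.loads(line)
    if time_key and time_key in fields:
        ts = int(time.mktime(time.strptime(fields[time_key], time_format)))
    else:
        ts = int(time.time())
    return fields, ts

fields, ts = parse_json_log('{"city": "shanghai", "ts": "2023-01-02 03:04:05"}',
                            time_key="ts", time_format="%Y-%m-%d %H:%M:%S")
```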
class aliyun.log.ApsaraFileConfigDetail(logstoreName, configName, logPath, filePattern, logBeginRegex, localStorage=None, enableRawLog=None, topicFormat=None, fileEncoding=None, maxDepth=None, preserve=None, preserveDepth=None, filterKey=None, filterRegex=None, createTime=None, modifyTime=None, **extended_items)
The logtail config for Apsara mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- logPath (string) – the log folder path, e.g. "/apsara/nuwa/"
- filePattern (string) – the file name pattern, e.g. "*.log"; combined with logPath it matches "/apsara/nuwa/…/*.log"
- logBeginRegex (string) – regex matching the start of a log entry; None means ".*", i.e. single-line mode
- localStorage (bool) – whether to use a 1GB local cache when logtail is offline; default is True
- enableRawLog (bool) – whether to upload the raw data in the content; default is False
- topicFormat (string) – "none", "group_topic", or a regex to extract the topic from the file path, e.g. "/test/(\w+).log" uses each file name as the topic; default is "none"
- fileEncoding (string) – "utf8" or "gbk" so far
- maxDepth (int) – max folder depth to scan; default is 100; 0 means scan only the root folder
- preserve (bool) – whether to preserve the monitor time-out; default is False; if set to True, a 30-minute time-out applies
- preserveDepth (int) – the folder depth to which the time-out applies (1-3)
- filterKey (string list) – only keep logs matching the keys, e.g. ["city", "location"] scans only files matching the two fields
- filterRegex (string list) – the matched values for filterKey, e.g. ["shanghai|beijing|nanjing", "east"]; note it's a list of regex values
- createTime (int) – creation timestamp; only meaningful when getting data from REST
- modifyTime (int) – last-modified timestamp; only meaningful when getting data from REST
- extended_items (dict) – extended items
class aliyun.log.SyslogConfigDetail(logstoreName, configName, tag, localStorage=None, createTime=None, modifyTime=None, **extended_items)
The logtail config for syslog mode
Parameters: - logstoreName (string) – the logstore name
- configName (string) – the config name
- tag (string) – the tag attached to the captured logs
- localStorage (bool) – whether to use a 1GB local cache when logtail is offline; default is True
- createTime (int) – creation timestamp; only meaningful when getting data from REST
- modifyTime (int) – last-modified timestamp; only meaningful when getting data from REST
- extended_items (dict) – extended items
class aliyun.log.MachineGroupDetail(group_name=None, machine_type=None, machine_list=None, group_type='', group_attribute=None)
The machine group detail info
Parameters: - group_name (string) – group name
- machine_type (string) – "ip" or "userdefined"
- machine_list (string list) – the list of machine IPs or user-defined identities, e.g. ["127.0.0.1", "127.0.0.2"]
- group_type (string) – the machine group type, "" or "Armory"
- group_attribute (dict) – the attributes of the group; it contains two optional keys: 1. "externalName": only used when group_type is "Armory"; it's the Armory name. 2. "groupTopic": the group topic value
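The constructor arguments above can be laid out as plain data to make the group_attribute shape concrete. This uses only the documented parameter names with hypothetical values; it is not the REST wire format.

```python
# A machine group described with the documented constructor arguments.
machine_group = {
    "group_name": "my-group",
    "machine_type": "ip",                        # "ip" or "userdefined"
    "machine_list": ["127.0.0.1", "127.0.0.2"],
    "group_type": "",                            # "" or "Armory"
    "group_attribute": {
        "groupTopic": "my-topic",                # optional group topic
        # "externalName" would only appear when group_type is "Armory"
    },
}
```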
class aliyun.log.PutLogsRequest(project=None, logstore=None, topic=None, source=None, logitems=None, hashKey=None, compress=True, logtags=None)
The request used to send data to log.
Parameters: - project (string) – project name
- logstore (string) – logstore name
- topic (string) – topic name
- source (string) – source of the logs
- logitems (list<LogItem>) – log data
- hashKey (string) – put data with a specified hash; the data will be sent to the shard whose range contains the hashKey
- compress (bool) – whether to compress the logs
- logtags (list) – list of key:value tag pairs, e.g. [(tag_key_1, tag_value_1), (tag_key_2, tag_value_2)]
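The hashKey routing rule above (data goes to the shard whose key range contains the hashKey) can be sketched independently of the SDK. The shard ranges here are hypothetical hex strings splitting the MD5 key space in half.

```python
def pick_shard(shards, hash_key):
    """Return the id of the shard whose [begin, end) hex key range
    contains hash_key; equal-length lowercase hex compares correctly
    as strings."""
    for shard_id, begin, end in shards:
        if begin <= hash_key < end:
            return shard_id
    return None

shards = [
    (0, "00000000000000000000000000000000", "80000000000000000000000000000000"),
    (1, "80000000000000000000000000000000", "ffffffffffffffffffffffffffffffff"),
]
print(pick_shard(shards, "5f000000000000000000000000000000"))  # 0
```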
class aliyun.log.ShipperTask(task_id, task_status, task_message, task_create_time, task_last_data_receive_time, task_finish_time)
A shipper task
Parameters: - task_id (string) – the task id
- task_status (string) – one of ['success', 'running', 'fail']
- task_message (string) – the error message when task_status is 'fail'
- task_create_time (int) – the task create time (Unix timestamp)
- task_last_data_receive_time (int) – the last log data receive time (Unix timestamp)
- task_finish_time (int) – the task finish time (Unix timestamp)
class aliyun.log.LogResponse(headers, body='')
The base response class of all log responses.
Parameters: headers (dict) – HTTP response header
class aliyun.log.GetLogsResponse(resp, header)
The response of the GetLog API from log.
Parameters: - resp (dict) – GetLogsResponse HTTP response body
- header (dict) – GetLogsResponse HTTP response header
class aliyun.log.ListLogstoresResponse(resp, header)
The response of the ListLogstores API from log.
Parameters: - resp (dict) – ListLogstoresResponse HTTP response body
- header (dict) – ListLogstoresResponse HTTP response header
class aliyun.log.ListTopicsResponse(resp, header)
The response of the ListTopic API from log.
Parameters: - resp (dict) – ListTopicsResponse HTTP response body
- header (dict) – ListTopicsResponse HTTP response header

get_count()
Get the number of all the topics from the response
Returns: int, the number of all the topics from the response
class aliyun.log.GetCursorResponse(resp, header)
The response of the get_cursor API from log.
Parameters: - header (dict) – GetCursorResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.GetCursorTimeResponse(resp, header)
The response of the get_cursor_time API from log.
Parameters: - header (dict) – GetCursorTimeResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.ListShardResponse(resp, header)
The response of the list_shard API from log.
Parameters: - header (dict) – ListShardResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.DeleteShardResponse(header, resp='')
The response of the delete_shard API from log.
Parameters: header (dict) – DeleteShardResponse HTTP response header
class aliyun.log.GetHistogramsResponse(resp, header)
The response of the GetHistograms API from log.
Parameters: - resp (dict) – GetHistogramsResponse HTTP response body
- header (dict) – GetHistogramsResponse HTTP response header

get_histograms()
Get histograms on the requested time range: [from, to)
Returns: Histogram list, histograms on the requested time range: [from, to)
class aliyun.log.Histogram(fromTime, toTime, count, progress)
The class used to present the result of a log histogram query. Each histogram contains: the from/to time range, the hit log count, and the query completion status.
Parameters: - fromTime (int) – the begin time
- toTime (int) – the end time
- count (int) – the log count the query hits in this histogram
- progress (string) – the histogram query status ("Complete" or "InComplete")
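Aggregating a list of such histograms follows directly from the count/progress fields. A sketch using plain dicts in place of Histogram objects:

```python
def summarize_histograms(histograms):
    """Total hit count across histogram buckets, plus whether every
    bucket's query completed (progress == "Complete")."""
    total = sum(h["count"] for h in histograms)
    completed = all(h["progress"] == "Complete" for h in histograms)
    return total, completed

buckets = [
    {"from": 1600000000, "to": 1600000060, "count": 3, "progress": "Complete"},
    {"from": 1600000060, "to": 1600000120, "count": 5, "progress": "Complete"},
]
print(summarize_histograms(buckets))  # (8, True)
```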
class aliyun.log.GetLogsResponse(resp, header)
The response of the GetLog API from log.
Parameters: - resp (dict) – GetLogsResponse HTTP response body
- header (dict) – GetLogsResponse HTTP response header

get_count()
Get the log count from the response
Returns: int, the log count

get_logs()
Get all logs from the response
Returns: QueriedLog list, all log data

is_completed()
Check if the get-logs query is completed
Returns: bool, True if this logs query is completed
class aliyun.log.QueriedLog(timestamp, source, contents)
A QueriedLog is a single log of the GetLogsResponse obtained from the log service.
Parameters: - timestamp (int) – log timestamp
- source (string) – log source
- contents (dict) – log contents, containing key/value pairs
class aliyun.log.PullLogResponse(resp, header)
The response of the pull_logs API from log.
Parameters: - header (dict) – PullLogResponse HTTP response header
- resp (string) – the HTTP response body
class aliyun.log.CreateIndexResponse(header, resp='')
The response of the create_index API from log.
Parameters: header (dict) – CreateIndexResponse HTTP response header
class aliyun.log.UpdateIndexResponse(header, resp='')
The response of the update_index API from log.
Parameters: header (dict) – UpdateIndexResponse HTTP response header
class aliyun.log.DeleteIndexResponse(header, resp='')
The response of the delete_index API from log.
Parameters: header (dict) – DeleteIndexResponse HTTP response header
class aliyun.log.GetIndexResponse(resp, header)
The response of the get_index_config API from log.
Parameters: - header (dict) – GetIndexResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.CreateLogtailConfigResponse(header, resp='')
The response of the create_logtail_config API from log.
Parameters: header (dict) – CreateLogtailConfigResponse HTTP response header
class aliyun.log.DeleteLogtailConfigResponse(header, resp='')
The response of the delete_logtail_config API from log.
Parameters: header (dict) – DeleteLogtailConfigResponse HTTP response header
class aliyun.log.GetLogtailConfigResponse(resp, header)
The response of the get_logtail_config API from log.
Parameters: - header (dict) – GetLogtailConfigResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.UpdateLogtailConfigResponse(header, resp='')
The response of the update_logtail_config API from log.
Parameters: header (dict) – UpdateLogtailConfigResponse HTTP response header
class aliyun.log.ListLogtailConfigResponse(resp, header)
The response of the list_logtail_config API from log.
Parameters: - header (dict) – ListLogtailConfigResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.CreateMachineGroupResponse(header, resp='')
The response of the create_machine_group API from log.
Parameters: header (dict) – CreateMachineGroupResponse HTTP response header
class aliyun.log.DeleteMachineGroupResponse(header, resp='')
The response of the delete_machine_group API from log.
Parameters: header (dict) – DeleteMachineGroupResponse HTTP response header
class aliyun.log.GetMachineGroupResponse(resp, header)
The response of the get_machine_group API from log.
Parameters: - header (dict) – GetMachineGroupResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.UpdateMachineGroupResponse(header, resp='')
The response of the update_machine_group API from log.
Parameters: header (dict) – UpdateMachineGroupResponse HTTP response header
class aliyun.log.ListMachineGroupResponse(resp, header)
The response of the list_machine_group API from log.
Parameters: - header (dict) – ListMachineGroupResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.ListMachinesResponse(resp, header)
The response of the list_machines API from log.
Parameters: - header (dict) – ListMachinesResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.ApplyConfigToMachineGroupResponse(header, resp='')
The response of the apply_config_to_machine_group API from log.
Parameters: header (dict) – ApplyConfigToMachineGroupResponse HTTP response header
class aliyun.log.RemoveConfigToMachineGroupResponse(header, resp='')
The response of the remove_config_to_machine_group API from log.
Parameters: header (dict) – RemoveConfigToMachineGroupResponse HTTP response header
class aliyun.log.GetMachineGroupAppliedConfigResponse(resp, header)
The response of the get_machine_group_applied_config API from log.
Parameters: - header (dict) – GetMachineGroupAppliedConfigResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.GetConfigAppliedMachineGroupsResponse(resp, header)
The response of the get_config_applied_machine_group API from log.
Parameters: - header (dict) – GetConfigAppliedMachineGroupsResponse HTTP response header
- resp (dict) – the HTTP response body
class aliyun.log.ConsumerGroupCheckPointResponse(resp, headers)

class aliyun.log.ListEntityResponse(header, resp, resource_name=None, entities_key=None)
class aliyun.log.SimpleLogHandler(end_point, access_key_id, access_key, project, log_store, topic=None, fields=None, buildin_fields_prefix=None, buildin_fields_suffix=None, extract_json=None, extract_json_drop_message=None, extract_json_prefix=None, extract_json_suffix=None, extract_kv=None, extract_kv_drop_message=None, extract_kv_prefix=None, extract_kv_suffix=None, extract_kv_sep=None, extra=None, **kwargs)
SimpleLogHandler sends each log synchronously (blocking); intended for simple test purposes only.
Parameters: - end_point – log service endpoint
- access_key_id – access key id
- access_key – access key
- project – project name
- log_store – logstore name
- topic – topic; by default it's empty
- fields – list of LogFields or list of LogFields names; default is LogFields.record_name, LogFields.level, LogFields.func_name, LogFields.module, LogFields.file_path, LogFields.line_no, LogFields.process_id, LogFields.process_name, LogFields.thread_id, LogFields.thread_name. You could also just use the string name like 'thread_name'. It's also possible to customize extra fields in this list by disabling extra fields and putting a whitelist here.
- buildin_fields_prefix – prefix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- buildin_fields_suffix – suffix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- extract_json – whether to extract JSON automatically; default is False
- extract_json_drop_message – whether to drop the message field when it's JSON and extract_json is True; default is False
- extract_json_prefix – prefix of fields extracted from JSON when extract_json is True; default is ""
- extract_json_suffix – suffix of fields extracted from JSON when extract_json is True; default is ""
- extract_kv – whether to extract KV pairs like k1=v1 k2="v 2" automatically; default is False
- extract_kv_drop_message – whether to drop the message field when it's KV and extract_kv is True; default is False
- extract_kv_prefix – prefix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_suffix – suffix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_sep – separator for the KV case; default is "=", e.g. k1=v1
- extra – whether to show extra info; default is True (show all). Note: the extra fields are also handled with buildin_fields_prefix/suffix.
- kwargs – other parameters passed to logging.Handler
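The extract_kv behavior described above (pulling pairs like k1=v1 k2="v 2" out of the message) can be approximated with a regex. This is a sketch of the semantics; the handler's actual parser and its extract_kv_sep handling may differ.

```python
import re

# key=value where the value is either quoted (may contain spaces) or bare.
_KV = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def extract_kv(message):
    """Pull k=v pairs out of a log message, extract_kv-style."""
    pairs = {}
    for m in _KV.finditer(message):
        pairs[m.group(1)] = m.group(2) if m.group(2) is not None else m.group(3)
    return pairs

print(extract_kv('k1=v1 k2="v 2"'))  # {'k1': 'v1', 'k2': 'v 2'}
```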
class aliyun.log.QueuedLogHandler(end_point, access_key_id, access_key, project, log_store, topic=None, fields=None, queue_size=None, put_wait=None, close_wait=None, batch_size=None, buildin_fields_prefix=None, buildin_fields_suffix=None, extract_json=None, extract_json_drop_message=None, extract_json_prefix=None, extract_json_suffix=None, extract_kv=None, extract_kv_drop_message=None, extract_kv_prefix=None, extract_kv_suffix=None, extract_kv_sep=None, extra=None, **kwargs)
Queued log handler; tuned async log handler.
Parameters: - end_point – log service endpoint
- access_key_id – access key id
- access_key – access key
- project – project name
- log_store – logstore name
- topic – topic; default is empty
- fields – list of LogFields; default is LogFields.record_name, LogFields.level, LogFields.func_name, LogFields.module, LogFields.file_path, LogFields.line_no, LogFields.process_id, LogFields.process_name, LogFields.thread_id, LogFields.thread_name
- queue_size – queue size; default is 40960 logs, about 10MB ~ 40MB
- put_wait – maximum delay when sending the logs; by default 2 seconds, and double that when the queue is full
- close_wait – on program exit, the handler tries to send all queued logs within this period; by default 5 seconds
- batch_size – merge this count of logs and send them as one batch; by default min(1024, queue_size)
- buildin_fields_prefix – prefix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- buildin_fields_suffix – suffix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- extract_json – whether to extract JSON automatically; default is False
- extract_json_drop_message – whether to drop the message field when it's JSON and extract_json is True; default is False
- extract_json_prefix – prefix of fields extracted from JSON when extract_json is True; default is ""
- extract_json_suffix – suffix of fields extracted from JSON when extract_json is True; default is ""
- extract_kv – whether to extract KV pairs like k1=v1 k2="v 2" automatically; default is False
- extract_kv_drop_message – whether to drop the message field when it's KV and extract_kv is True; default is False
- extract_kv_prefix – prefix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_suffix – suffix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_sep – separator for the KV case; default is "=", e.g. k1=v1
- extra – whether to show extra info; default is True (show all)
- kwargs – other parameters passed to logging.Handler
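The queue_size/batch_size shape of the handler above can be sketched with the stdlib. In this stand-in, emit() only enqueues so the caller never blocks on the network, and flush() drains the queue in batches into a list instead of the log service; the real handler runs the drain loop on a background worker.

```python
import logging
import queue

class QueuedHandlerSketch(logging.Handler):
    """Minimal sketch of a queued log handler: non-blocking emit(),
    batched flush(). Not the SDK implementation."""

    def __init__(self, queue_size=40960, batch_size=1024):
        super().__init__()
        self.q = queue.Queue(maxsize=queue_size)
        self.batch_size = min(batch_size, queue_size)
        self.sent_batches = []          # stands in for the log service

    def emit(self, record):
        self.q.put(self.format(record))  # enqueue only; no network here

    def flush(self):
        batch = []
        while not self.q.empty():
            batch.append(self.q.get())
            if len(batch) == self.batch_size:
                self.sent_batches.append(batch)
                batch = []
        if batch:
            self.sent_batches.append(batch)

log = logging.getLogger("sketch")
handler = QueuedHandlerSketch()
log.addHandler(handler)
log.warning("hello")
handler.flush()
```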
class aliyun.log.UwsgiQueuedLogHandler(*args, **kwargs)
Queued log handler for uWSGI; depends on the library uwsgidecorators, which needs to be deployed separately.
Parameters: - end_point – log service endpoint
- access_key_id – access key id
- access_key – access key
- project – project name
- log_store – logstore name
- topic – topic; default is empty
- fields – list of LogFields; default is LogFields.record_name, LogFields.level, LogFields.func_name, LogFields.module, LogFields.file_path, LogFields.line_no, LogFields.process_id, LogFields.process_name, LogFields.thread_id, LogFields.thread_name
- queue_size – queue size; default is 40960 logs, about 10MB ~ 40MB
- put_wait – maximum delay when sending the logs; by default 2 seconds, and double that when the queue is full
- close_wait – on program exit, the handler tries to send all queued logs within this period; by default 2 seconds
- batch_size – merge this count of logs and send them as one batch; by default min(1024, queue_size)
- buildin_fields_prefix – prefix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- buildin_fields_suffix – suffix of built-in fields; default is empty. Suggest using "__" when extract_json is True to prevent conflicts.
- extract_json – whether to extract JSON automatically; default is False
- extract_json_drop_message – whether to drop the message field when it's JSON and extract_json is True; default is False
- extract_json_prefix – prefix of fields extracted from JSON when extract_json is True; default is ""
- extract_json_suffix – suffix of fields extracted from JSON when extract_json is True; default is ""
- extract_kv – whether to extract KV pairs like k1=v1 k2="v 2" automatically; default is False
- extract_kv_drop_message – whether to drop the message field when it's KV and extract_kv is True; default is False
- extract_kv_prefix – prefix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_suffix – suffix of fields extracted from KV when extract_kv is True; default is ""
- extract_kv_sep – separator for the KV case; default is "=", e.g. k1=v1
- extra – whether to show extra info; default is True (show all)
- kwargs – other parameters passed to logging.Handler
class aliyun.log.LogFields
Fields uploaded automatically. Possible fields: record_name, level, func_name, module, file_path, line_no, process_id, process_name, thread_id, thread_name
class aliyun.log.es_migration.MigrationManager(hosts=None, indexes=None, query=None, scroll='5m', endpoint=None, project_name=None, access_key_id=None, access_key=None, logstore_index_mappings=None, pool_size=10, time_reference=None, source=None, topic=None, wait_time_in_secs=60, auto_creation=True)
MigrationManager migrates data from Elasticsearch to Aliyun Log Service.
Parameters: - hosts (string) – a comma-separated list of source ES nodes, e.g. "localhost:9200,other_host:9200"
- indexes (string) – a comma-separated list of source index names, e.g. "index1,index2"
- query (string) – used to filter docs, so that you can specify which docs to migrate, e.g. '{"query": {"match": {"title": "python"}}}'
- scroll (string) – how long a consistent view of the index should be maintained for scrolled search, e.g. "5m"
- endpoint (string) – the endpoint of your log service, e.g. "cn-beijing.log.aliyuncs.com"
- project_name (string) – the project name of your log service, e.g. "your_project"
- access_key_id (string) – the access_key_id of your account
- access_key (string) – the access_key of your account
- logstore_index_mappings (string) – the mappings from log service logstores to ES indexes, e.g. '{"logstore1": "my_index*", "logstore2": "index1,index2", "logstore3": "index3"}'
- pool_size (int) – the size of the process pool, e.g. 10
- time_reference (string) – which ES doc field to use as the log's time field, e.g. "field1"
- source (string) – the value of the log's source field, e.g. "your_source"
- topic (string) – the value of the log's topic field, e.g. "your_topic"
- wait_time_in_secs (int) – the waiting time between initializing the aliyun log resources and executing the data migration task, e.g. 60
- auto_creation (bool) – whether to let the tool create the logstore and index automatically for you, e.g. True
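One plausible reading of the logstore_index_mappings semantics above (each value is a comma-separated list of index names or wildcard patterns) can be sketched with fnmatch. This is an illustration of the mapping format, not the tool's actual matching code.

```python
import fnmatch
import json

def map_index_to_logstore(mappings_json, index):
    """Resolve which logstore an ES index maps to: return the first
    logstore whose comma-separated patterns match the index name."""
    mappings = json.loads(mappings_json)
    for logstore, patterns in mappings.items():
        for pattern in patterns.split(","):
            if fnmatch.fnmatch(index, pattern.strip()):
                return logstore
    return None

mappings = '{"logstore1": "my_index*", "logstore2": "index1,index2"}'
print(map_index_to_logstore(mappings, "my_index_2023"))  # logstore1
```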