Help on performance

Started by Carlos Eduardo Sotelo Pinto over 12 years ago | 12 messages | general
#1 Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server, but it is slow and CPU usage is higher than 120%.

I have no idea how to solve this issue. I tried searching for more information on Google, but that was not enough; I have also tried autovacuum and reindexing the database, but it is still slow.

My app is a GPS listener that inserts more than 6,000 records per minute through a TCP server built on Python Twisted, where there are no problems. The problem appears when I try to follow the GPS devices on a map in real time: every 6 seconds my Django app queries the database for the last position of each device through a stored procedure, but the query gets slow with more than 50 devices and the CPU starts using more than 120% of its resources.

The Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database in a threaded way through pgbouncer. I am not using pgbouncer for my Django web app because I don't want to disrupt the GPS device connections on pgbouncer.

I hope you can help me get better performance.

I am attaching my stored procedure, my conf files, and my CPU and memory information.

**Stored procedure**

CREATE OR REPLACE FUNCTION gps_get_live_location (
    _imeis varchar(8)
)
RETURNS TABLE (
    imei varchar,
    device_id integer,
    date_time_process timestamp with time zone,
    latitude double precision,
    longitude double precision,
    course smallint,
    speed smallint,
    mileage integer,
    gps_signal smallint,
    gsm_signal smallint,
    alarm_status boolean,
    gsm_status boolean,
    vehicle_status boolean,
    alarm_over_speed boolean,
    other text,
    address varchar
) AS $func$
DECLARE
    arr varchar[];
BEGIN
    arr := regexp_split_to_array(_imeis, E'\\s+');
    FOR i IN 1..array_length(arr, 1) LOOP
        RETURN QUERY
        SELECT
            gpstracking_device_tracks.imei,
            gpstracking_device_tracks.device_id,
            gpstracking_device_tracks.date_time_process,
            gpstracking_device_tracks.latitude,
            gpstracking_device_tracks.longitude,
            gpstracking_device_tracks.course,
            gpstracking_device_tracks.speed,
            gpstracking_device_tracks.mileage,
            gpstracking_device_tracks.gps_signal,
            gpstracking_device_tracks.gsm_signal,
            gpstracking_device_tracks.alarm_status,
            gpstracking_device_tracks.gps_status,
            gpstracking_device_tracks.vehicle_status,
            gpstracking_device_tracks.alarm_over_speed,
            gpstracking_device_tracks.other,
            gpstracking_device_tracks.address
        FROM gpstracking_device_tracks
        WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
          AND gpstracking_device_tracks.date_time_process >= date_trunc('hour', now())
          AND gpstracking_device_tracks.date_time_process <= NOW()
        ORDER BY gpstracking_device_tracks.date_time_process DESC
        LIMIT 1;
    END LOOP;
    RETURN;
END;
$func$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
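For context on what the function does: it runs one indexed lookup per IMEI inside a plpgsql loop. The same "latest row per device" result can usually be expressed as a single set-based query with DISTINCT ON, which avoids the per-iteration overhead. A minimal sketch, not a drop-in replacement; the table and column names are taken from the function above, while the index name and the 'imei1 imei2' input are illustrative placeholders:

```sql
-- Hypothetical supporting index: lets Postgres read the newest row per
-- IMEI directly from the index instead of sorting (name is illustrative).
CREATE INDEX gpstracking_device_tracks_imei_dtp_idx
    ON gpstracking_device_tracks (imei, date_time_process DESC);

-- Latest position per device in one statement instead of a loop:
SELECT DISTINCT ON (t.imei)
       t.imei, t.device_id, t.date_time_process,
       t.latitude, t.longitude, t.speed
FROM   gpstracking_device_tracks t
WHERE  t.imei = ANY (regexp_split_to_array('imei1 imei2', E'\\s+'))
  AND  t.date_time_process >= date_trunc('hour', now())
ORDER  BY t.imei, t.date_time_process DESC;
```

DISTINCT ON keeps the first row per `imei` in the ORDER BY order, i.e. the most recent fix for each device.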

**$ cat /etc/sysctl.conf**

kernel.shmmax = 6871947673
kernel.shmall = 6871947673
fs.file-max = 4194304

**$ cat /etc/postgresql/9.1/main/postgresql.conf**

data_directory = '/var/lib/postgresql/9.1/main' # use data in another directory
hba_file = '/etc/postgresql/9.1/main/pg_hba.conf' # host-based authentication file
ident_file = '/etc/postgresql/9.1/main/pg_ident.conf' # ident configuration file
external_pid_file = '/var/run/postgresql/9.1-main.pid' # write an extra PID file
listen_addresses = 'localhost' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 80 # (change requires restart)
superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directory = '/var/run/postgresql' # (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
#bonjour = off # advertise server via Bonjour
#bonjour_name = '' # defaults to the computer name
ssl = true # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers
#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
#krb_server_keyfile = ''
#krb_srvname = 'postgres' # (Kerberos only)
#krb_caseins_users = off
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# shared_buffers = 4096MB # min 128kB
temp_buffers = 16MB # min 800kB
# work_mem = 80MB # min 64kB
# maintenance_work_mem = 2048MB # min 1MB
max_stack_depth = 4MB # min 100kB
#max_files_per_process = 1000 # min 25
#vacuum_cost_delay = 0ms # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching
#wal_level = minimal # minimal, archive, or hot_standby
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # synchronization level; on, off, or local
#wal_sync_method = fsync # the default is the first option
#full_page_writes = on # recover from partial page writes
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
#wal_writer_delay = 200ms # 1-10000 milliseconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# checkpoint_segments = 64 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 5min # range 30s-1h
# checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
#archive_mode = off # allows archiving to be done
#archive_command = '' # command to use to archive a logfile segment
#archive_timeout = 0 # force a logfile segment switch after this
#max_wal_senders = 0 # max number of walsender processes
#wal_sender_delay = 1s # walsender cycle time, 1-10000 milliseconds
#wal_keep_segments = 0 # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
#replication_timeout = 60s # in milliseconds; 0 disables
#synchronous_standby_names = '' # standby servers that provide sync rep
#hot_standby = off # "on" allows queries during recovery
#max_standby_archive_delay = 30s # max delay before canceling queries
#max_standby_streaming_delay = 30s # max delay before canceling queries
#wal_receiver_status_interval = 10s # send replies at least this often
#hot_standby_feedback = off # send info from standby to prevent
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
cpu_tuple_cost = 0.01 # same scale as above
cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
# effective_cache_size = 8192MB
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
#log_destination = 'stderr' # Valid values are combinations of
#logging_collector = off # Enable capturing of stderr and csvlog
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
#log_file_mode = 0600 # creation mode for log files,
#log_truncate_on_rotation = off # If on, an existing log file with the
#log_rotation_age = 1d # Automatic rotation of logfiles will
#log_rotation_size = 10MB # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#silent_mode = off # Run server silently.
#client_min_messages = notice # values in order of decreasing detail:
#log_min_messages = warning # values in order of decreasing detail:
#log_min_error_statement = error # values in order of decreasing detail:
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
#log_timezone = '(defaults to server environment setting)'
#track_activities = on
#track_counts = on
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#autovacuum = on # Enable autovacuum subprocess? 'on'
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
#autovacuum_analyze_threshold = 50 # min number of row updates before
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
#search_path = '"$user",public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = '(defaults to server environment setting)'
#timezone_abbreviations = 'Default' # Select the set of available time zone
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
#max_pred_locks_per_transaction = 64 # min 10
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
#transform_null_equals = off
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#custom_variable_classes = '' # list of custom variable class names
default_statistics_target = 50 # pgtune wizard 2013-09-24
maintenance_work_mem = 960MB # pgtune wizard 2013-09-24
constraint_exclusion = on # pgtune wizard 2013-09-24
checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24
effective_cache_size = 11GB # pgtune wizard 2013-09-24
work_mem = 96MB # pgtune wizard 2013-09-24
wal_buffers = 8MB # pgtune wizard 2013-09-24
checkpoint_segments = 16 # pgtune wizard 2013-09-24
shared_buffers = 3840MB # pgtune wizard 2013-09-24
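Since the question is where the CPU time goes, a standard first diagnostic step (not in the original message) is to time the function's inner query with EXPLAIN ANALYZE and to surface slow statements in the server log. A sketch, using the table from the stored procedure above; '123456789012345' is a placeholder IMEI:

```sql
-- Show the actual plan and per-node timings for one device lookup:
EXPLAIN ANALYZE
SELECT *
FROM   gpstracking_device_tracks
WHERE  imei = '123456789012345'
  AND  date_time_process >= date_trunc('hour', now())
ORDER  BY date_time_process DESC
LIMIT  1;

-- Log every statement slower than 200 ms (in postgresql.conf, or
-- per-session as a superuser):
SET log_min_duration_statement = '200ms';
```

If the plan shows a sequential scan or a large sort instead of an index scan ending in the LIMIT, the query is scanning far more rows than it returns.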

**$ cat /etc/pgbouncer/pgbouncer.ini**

[databases]
anfitrion = host=127.0.0.1 port=5432 dbname=**** user=****
password=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'

[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = 127.0.0.1
listen_port = 6432
unix_socket_dir = /var/run/postgresql
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
;admin_users = user2, someadmin, otheradmin
;stats_users = stats, root
pool_mode = statement
server_reset_query = DISCARD ALL
;ignore_startup_parameters = extra_float_digits
;server_check_query = select 1
;server_check_delay = 30
; total number of clients that can connect
max_client_conn = 1000
default_pool_size = 80
;reserve_pool_size = 5
;reserve_pool_timeout = 3
;log_connections = 1
;log_disconnections = 1
;log_pooler_errors = 1
;server_round_robin = 0
;server_lifetime = 1200
;server_idle_timeout = 60
;server_connect_timeout = 15
;server_login_retry = 15
;query_timeout = 0
;query_wait_timeout = 0
;client_idle_timeout = 0
;client_login_timeout = 60
;autodb_idle_timeout = 3600
;pkt_buf = 2048
;listen_backlog = 128
;tcp_defer_accept = 0
;tcp_socket_buffer = 0
;tcp_keepalive = 1
;tcp_keepcnt = 0
;tcp_keepidle = 0
;tcp_keepintvl = 0
;dns_max_ttl = 15
;dns_zone_check_period = 0

**$ free -h**
                    total    used    free  shared  buffers  cached
Mem:                  15G     11G    4.1G      0B     263M     10G
-/+ buffers/cache:           1.2G     14G
Swap:                 30G      0B     30G

**$ cat /proc/cpuinfo**

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping : 9
microcode : 0x15
cpu MHz : 3101.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips : 6186.05
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping : 9
microcode : 0x15
cpu MHz : 3101.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips : 6185.65
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping : 9
microcode : 0x15
cpu MHz : 3101.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer
aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips : 6185.66
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#2 Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com
In reply to: Carlos Eduardo Sotelo Pinto (#1)
Fwd: Help on performance

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server, but it is slow and CPU usage is above 120%.

I have run out of ideas on how to solve this problem. I tried searching for more information on Google, but that was not enough; I have also tried autovacuum and reindexing the database, but it is still slow.

My application is a GPS listener that inserts more than 6,000 records per minute through a TCP server built on Python Twisted, where there are no problems. The problem appears when I try to follow the GPS devices on a map in real time: every 6 seconds my Django app queries the database for the last position through a stored procedure, but the query gets slow with more than 50 devices and the CPU starts consuming more than 120% of its resources.

My Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database through pgbouncer. I am not using pgbouncer for my Django web app because I don't want to disrupt the GPS device connections on pgbouncer.

Attached are my stored procedure, my conf files, and my CPU and memory information; I hope you can help me.

power management:

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#3Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com
In reply to: Carlos Eduardo Sotelo Pinto (#1)
Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

Thanks, Gilberto

Yes, I did that, but in the case of the stored procedure there is a loop, and I am looking for a way to improve it.

Regards

On September 30, 2013 at 11:25, Gilberto Castillo <gilberto.castillo@etecsa.cu> wrote:

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have run out of ideas on how to solve this problem. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and REINDEX, but it is still slow.

My application is a GPS listener that inserts more than 6,000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django application for the last position through a stored procedure, but the query becomes slow with more than 50 devices and the CPU starts to consume more than 120% of its resources.

My Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database through pgbouncer. I have not put my Django web app behind pgbouncer because I do not want to disrupt the GPS device connections in pgbouncer.

Hmmm, I suggest you route that connection through pgbouncer as well.

Have you run EXPLAIN ANALYZE on the queries?

Regards,
Gilberto Castillo
Havana, Cuba

---
This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE
running at host imx3.etecsa.cu
Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com>

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#4Hellmuth Vargas
hivs77@gmail.com
In reply to: Carlos Eduardo Sotelo Pinto (#1)
Re: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

Hello list,

I would suggest doing a full diagnosis of the platform; an excellent book for this purpose is:
http://www.amazon.com/PostgreSQL-High-Performance-Gregory-Smith/dp/184951030X

But if you need something more immediate, I found this blog with some interesting topics:

http://maauso.wordpress.com/2013/09/18/resumen-instalacion-de-postgresql-9-2-optimizada-para-sistemas-de-produccion/

On September 30, 2013 at 11:25, Gilberto Castillo <gilberto.castillo@etecsa.cu> wrote:

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have run out of ideas on how to solve this problem. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and REINDEX, but it is still slow.

My application is a GPS listener that inserts more than 6,000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django application for the last position through a stored procedure, but the query becomes slow with more than 50 devices and the CPU starts to consume more than 120% of its resources.

My Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database through pgbouncer. I have not put my Django web app behind pgbouncer because I do not want to disrupt the GPS device connections in pgbouncer.

Hmmm, I suggest you route that connection through pgbouncer as well.

Have you run EXPLAIN ANALYZE on the queries?

Regards,
Gilberto Castillo
Havana, Cuba


-
Sent to the pgsql-es-ayuda mailing list (pgsql-es-ayuda@postgresql.org)
To change your subscription:
http://www.postgresql.org/mailpref/pgsql-es-ayuda

--
Cordially,

Ing. Hellmuth I. Vargas S.
Specialist in Telematics and Internet Business
Oracle Database 10g Administrator Certified Associate
PostgreSQL DBA

#5Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Carlos Eduardo Sotelo Pinto (#2)
Re: Fwd: Help on ṕerformance

Carlos Eduardo Sotelo Pinto wrote:

DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY

I think you should run a single query with all the elements of the array, instead of one query per element. That is, remove the LOOP and the LIMIT 1, and instead of

...

FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR

your WHERE should be something like

WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...

You will, of course, have to solve in some other way the requirement that only one row be returned per imei.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
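Álvaro's suggestion can be sketched with Postgres's `DISTINCT ON`, which keeps the first row per `imei` directly. This is only a sketch based on the table and columns quoted in the thread; the IMEI literals are placeholders, and the real function would pass `_imeis` in instead:

```
-- One row per imei, newest first, without the PL/pgSQL loop.
-- '12345678 87654321' stands in for the _imeis argument.
SELECT DISTINCT ON (t.imei)
       t.imei,
       t.device_id,
       t.date_time_process,
       t.latitude,
       t.longitude
FROM gpstracking_device_tracks t
WHERE t.imei = ANY (regexp_split_to_array('12345678 87654321', E'\\s+'))
  AND t.date_time_process >= date_trunc('hour', now())
  AND t.date_time_process <= now()
ORDER BY t.imei, t.date_time_process DESC;
```

`DISTINCT ON` requires the leading `ORDER BY` columns to match its expression list, which is why `imei` comes first in the `ORDER BY`.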

-
Sent to the pgsql-es-ayuda mailing list (pgsql-es-ayuda@postgresql.org)
To change your subscription:
http://www.postgresql.org/mailpref/pgsql-es-ayuda

#6Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com
In reply to: Alvaro Herrera (#5)
Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

Hello Alvaro

I have partially solved the problem with a query like this:

arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY
SELECT
gpstracking_device_tracks.....
FROM (
SELECT
gpstracking_device_tracks......
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;
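For this access pattern (filter by `imei`, order by `date_time_process` descending), a composite index is the usual complement. The statement below is only a sketch, since the thread does not show the existing indexes, and the index name is hypothetical:

```
-- Hypothetical composite index matching the WHERE / ORDER BY above.
-- On a table partitioned by month (as described later in the thread),
-- this must be created on each child partition.
CREATE INDEX idx_device_tracks_imei_dtp
    ON gpstracking_device_tracks (imei, date_time_process DESC);
```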

And now I am reading a bit about tuning, since I am not a DBA, much less an optimization expert, but I hope I can improve performance even further.

Many thanks to everyone for the help

On September 30, 2013 at 11:12, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Carlos Eduardo Sotelo Pinto wrote:

DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY

I think you should run a single query with all the elements of the array, instead of one query per element. That is, remove the LOOP and the LIMIT 1, and instead of

...

FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR

your WHERE should be something like

WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...

You will, of course, have to solve in some other way the requirement that only one row be returned per imei.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#7Gilberto Castillo
gilberto.castillo@etecsa.cu
In reply to: Carlos Eduardo Sotelo Pinto (#2)
Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have run out of ideas on how to solve this problem. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and REINDEX, but it is still slow.

My application is a GPS listener that inserts more than 6,000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django application for the last position through a stored procedure, but the query becomes slow with more than 50 devices and the CPU starts to consume more than 120% of its resources.

My Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database through pgbouncer. I have not put my Django web app behind pgbouncer because I do not want to disrupt the GPS device connections in pgbouncer.

Hmmm, I suggest you route that connection through pgbouncer as well.

Have you run EXPLAIN ANALYZE on the queries?

Regards,
Gilberto Castillo
Havana, Cuba

#8Rodriguez Fernando
rodriguez@ort.edu.uy
In reply to: Carlos Eduardo Sotelo Pinto (#6)
Re: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

On 09/30/2013 01:16 p.m., Carlos Eduardo Sotelo Pinto wrote:

Hello Alvaro

I have partially solved the problem with a query like this:

arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY
SELECT
gpstracking_device_tracks.....
FROM (
SELECT
gpstracking_device_tracks......
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;

And now I am reading a bit about tuning, since I am not a DBA, much less an optimization expert, but I hope I can improve performance even further.

Many thanks to everyone for the help

On September 30, 2013 at 11:12, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Carlos Eduardo Sotelo Pinto wrote:

DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY

I think you should run a single query with all the elements of the array, instead of one query per element. That is, remove the LOOP and the LIMIT 1, and instead of

...

FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR

your WHERE should be something like

WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...

You will, of course, have to solve in some other way the requirement that only one row be returned per imei.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

Which indexes do you have on the table, and how are they defined?
Does the table keep historical data, or do you purge it periodically?

Regards, Fernando
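Fernando's questions can be answered from the catalog; a generic lookup (nothing here is specific to the thread beyond the table name):

```
-- List the indexes defined on the tracks table. If the table is
-- partitioned by inheritance, run the same query for each child table.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'gpstracking_device_tracks';
```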

#9Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com
In reply to: Rodriguez Fernando (#8)
Re: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Re: [pgsql-es-ayuda] Fwd: Help on ṕerformance

Hello Fernando

I am not an expert on the subject; for now I am going by what I find:

- The table is partitioned by month
- Indexes on the imei and the date
- No handling of historical data

Regards

On September 30, 2013 at 11:31, Rodriguez Fernando <rodriguez@ort.edu.uy> wrote:

On 09/30/2013 01:16 p.m., Carlos Eduardo Sotelo Pinto wrote:

Hello Alvaro

I have partially solved the problem with a query like this:

arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY
SELECT
gpstracking_device_tracks.....
FROM (
SELECT
gpstracking_device_tracks......
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;

And now I am reading a bit about tuning, since I am not a DBA, much less an optimization expert, but I hope I can improve performance even further.

Many thanks to everyone for the help

On September 30, 2013 at 11:12, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Carlos Eduardo Sotelo Pinto wrote:

DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY

I think you should run a single query with all the elements of the array, instead of one query per element. That is, remove the LOOP and the LIMIT 1, and instead of

...

FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR

your WHERE should be something like

WHERE gpstracking_device_tracks.imei = ANY (arr) AND ...

You will, of course, have to solve in some other way the requirement that only one row be returned per imei.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

Which indexes do you have on the table, and how are they defined?
Does the table keep historical data, or do you purge it periodically?

Regards, Fernando

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#10Merlin Moncure
mmoncure@gmail.com
In reply to: Carlos Eduardo Sotelo Pinto (#1)
Re: [GENERAL] Help on ṕerformance

On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
<carlos.sotelo.pinto@gmail.com> wrote:

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have no idea how to solve this issue. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and reindexing the database, but it is still slow.

My app is a GPS listener that inserts more than 6000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django app for the last position through a stored procedure, but the query gets slow with more than 50 devices and the CPU starts using more than 120% of its resources.

The Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database in a threaded way through pgbouncer. I have not put my Django web app on pgbouncer because I do not want to disrupt the GPS device connections on pgbouncer.

I hope you can help me get better performance.

I am attaching my stored procedure, my conf files, and my CPU and memory information.

**Stored procedure**

CREATE OR REPLACE FUNCTION gps_get_live_location (
_imeis varchar(8)
)
RETURNS TABLE (
imei varchar,
device_id integer,
date_time_process timestamp with time zone,
latitude double precision,
longitude double precision,
course smallint,
speed smallint,
mileage integer,
gps_signal smallint,
gsm_signal smallint,
alarm_status boolean,
gsm_status boolean,
vehicle_status boolean,
alarm_over_speed boolean,
other text,
address varchar
) AS $func$
DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
ORDER BY gpstracking_device_tracks.date_time_process DESC
LIMIT 1;
END LOOP;
RETURN;
END;
$func$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

Why are you doing this in a loop? What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

#11Carlos Eduardo Sotelo Pinto
carlos.sotelo.pinto@gmail.com
In reply to: Merlin Moncure (#10)
Re: [GENERAL] Help on ṕerformance

Thanks to all

I have fixed it by refactoring the function:

BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM (
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address,
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;
END;

2013/10/2 Merlin Moncure <mmoncure@gmail.com>

On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
<carlos.sotelo.pinto@gmail.com> wrote:

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have no idea how to solve this issue. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and reindexing the database, but it is still slow.

My app is a GPS listener that inserts more than 6000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django app for the last position through a stored procedure, but the query gets slow with more than 50 devices and the CPU starts using more than 120% of its resources.

The Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database in a threaded way through pgbouncer. I have not put my Django web app on pgbouncer because I do not want to disrupt the GPS device connections on pgbouncer.

I hope you can help me get better performance.

I am attaching my stored procedure, my conf files, and my CPU and memory information.

**Stored procedure**

CREATE OR REPLACE FUNCTION gps_get_live_location (
_imeis varchar(8)
)
RETURNS TABLE (
imei varchar,
device_id integer,
date_time_process timestamp with time zone,
latitude double precision,
longitude double precision,
course smallint,
speed smallint,
mileage integer,
gps_signal smallint,
gsm_signal smallint,
alarm_status boolean,
gsm_status boolean,
vehicle_status boolean,
alarm_over_speed boolean,
other text,
address varchar
) AS $func$
DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
ORDER BY gpstracking_device_tracks.date_time_process DESC
LIMIT 1;
END LOOP;
RETURN;
END;
$func$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

Why are you doing this in a loop? What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.

merlin

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B

#12Rémi Cura
remi.cura@gmail.com
In reply to: Carlos Eduardo Sotelo Pinto (#11)
Re: [GENERAL] Re: [GENERAL] Help on ṕerformance

Hey, a quick trick to avoid repeating the schema name each time (more readable and easier to reuse): you can use

SET search_path TO gpstracking_device_tracks, public;

(see the manual here:
http://www.postgresql.org/docs/current/static/sql-set.html)

Cheers,

Rémi-C
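The same setting can also be attached to the function itself, so it applies only while the function runs. This is a sketch; `tracking_schema` is a hypothetical schema name, since the thread never names the schema:

```
-- Hypothetical: scope the search_path to this one function,
-- so its body can reference tables without schema qualification.
ALTER FUNCTION gps_get_live_location(varchar)
    SET search_path = tracking_schema, public;
```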

2013/10/2 Carlos Eduardo Sotelo Pinto <carlos.sotelo.pinto@gmail.com>


Thanks to all

I have fixed it by refactoring the function:

BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM (
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address,
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;
END;

2013/10/2 Merlin Moncure <mmoncure@gmail.com>

On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
<carlos.sotelo.pinto@gmail.com> wrote:

I need help with PostgreSQL performance.

I have configured my PostgreSQL files to tune my server; however, it is slow and CPU usage is above 120%.

I have no idea how to solve this issue. I tried to find more information on Google, but it was not enough; I have also tried autovacuum and reindexing the database, but it is still slow.

My app is a GPS listener that inserts more than 6000 records per minute through a TCP server developed in Python Twisted, where there are no problems. The problem is when I try to follow the GPS devices on a map in real time: I query my database every 6 seconds from my Django app for the last position through a stored procedure, but the query gets slow with more than 50 devices and the CPU starts using more than 120% of its resources.

The Django app connects to the Postgres database directly, and the TCP listener server for the devices connects to the database in a threaded way through pgbouncer. I have not put my Django web app on pgbouncer because I do not want to disrupt the GPS device connections on pgbouncer.

I hope you can help me get better performance.

I am attaching my stored procedure, my conf files, and my CPU and memory information.

**Stored procedure**

CREATE OR REPLACE FUNCTION gps_get_live_location (
_imeis varchar(8)
)
RETURNS TABLE (
imei varchar,
device_id integer,
date_time_process timestamp with time zone,
latitude double precision,
longitude double precision,
course smallint,
speed smallint,
mileage integer,
gps_signal smallint,
gsm_signal smallint,
alarm_status boolean,
gsm_status boolean,
vehicle_status boolean,
alarm_over_speed boolean,
other text,
address varchar
) AS $func$
DECLARE
arr varchar[];
BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
FOR i IN 1..array_length(arr, 1) LOOP
RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
AND gpstracking_device_tracks.date_time_process >=

date_trunc('hour',

now())
AND gpstracking_device_tracks.date_time_process <= NOW()
ORDER BY gpstracking_device_tracks.date_time_process DESC
LIMIT 1;
END LOOP;
RETURN;
END;
$func$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

Why are you doing this in a loop? What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.

merlin

--
Carlos Eduardo Sotelo Pinto | http://carlossotelo.com | csotelo@twitter
GNU Linux Admin | PHP Senior Web Developer
Mobil: RPC (Claro)+51, 958194614 | Mov: +51, 959980794
GTalk: carlos.sotelo.pinto@gmail.com | Skype: csotelop
MSN: carlos.sotelo.pinto@gmail.com | Yahoo: csotelop
GNULinux RU #379182 | GNULinux RM #277661
GPG FP:697E FAB8 8E83 1D60 BBFB 2264 9E3D 5761 F855 4F6B