
#top Useful Information


#top Timeout


See also Timeout for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Timeout for: ProFTPd | Pure-FTPd | vsftpd | Dovecot | Postfix | OpenLDAP
See also Timeout for: pgpool | PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

(See the Timeout section)
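As a minimal per-session sketch (the timeout values below are arbitrary examples), the PostgreSQL timeout parameters can be inspected and changed with SHOW and SET:

```sql
-- show the current statement timeout (0 means disabled)
SHOW statement_timeout;
-- abort any statement in this session that runs longer than 5 seconds
SET statement_timeout = '5s';
-- abort a statement that waits longer than 2 seconds for a lock
SET lock_timeout = '2s';
```

The same parameters can also be set globally in postgresql.conf or per role with ALTER ROLE ... SET.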



#top Database Storage Layout


See also Database Storage Layout for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

PostgreSQL documentation: Database File Layout
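As a quick sketch of how to locate that on-disk layout, two built-in helpers can be used (join01 refers to the example table created in the SQL JOIN section of this page):

```sql
-- show the cluster data directory (PGDATA)
SHOW data_directory;
-- show the file path of a table's main fork, relative to the data directory
SELECT pg_relation_filepath('join01');
```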



#top Tablespace


See also Tablespace for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

Definition:

PostgreSQL documentation: Tablespaces

PostgreSQL documentation: SQL queries: CREATE TABLESPACE | ALTER TABLESPACE | DROP TABLESPACE
SQL queries: SQL syntax: CREATE TABLESPACE | ALTER TABLESPACE | DROP TABLESPACE
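A minimal sketch of the statements linked above; the directory path and tablespace name are hypothetical examples, and the directory must already exist, be empty, and be owned by the database server's system user:

```sql
-- create a tablespace backed by an existing directory
CREATE TABLESPACE fastspace LOCATION '/ssd1/postgresql/data';
-- rename the tablespace
ALTER TABLESPACE fastspace RENAME TO fast_ts;
-- drop the tablespace (it must contain no database objects)
DROP TABLESPACE fast_ts;
```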



#top SQL JOIN Visual Explanation


See also SQL JOIN Visual Explanation for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

PostgreSQL documentation: FROM Clause - join_type

SQL JOIN, as the name suggests, is a joining query: more precisely, a query that retrieves data from more than one table, matching the selected rows on a common column.
Many ways of joining tables are available, depending on how rows are matched. The following join types can be distinguished:
  • [INNER] JOIN - the INNER keyword is optional; JOIN and INNER JOIN are equivalent,
  • LEFT [OUTER] JOIN - the OUTER keyword is optional; LEFT JOIN and LEFT OUTER JOIN are equivalent,
  • RIGHT [OUTER] JOIN - the OUTER keyword is optional; RIGHT JOIN and RIGHT OUTER JOIN are equivalent,
  • FULL [OUTER] JOIN - the OUTER keyword is optional; FULL JOIN and FULL OUTER JOIN are equivalent,
  • CROSS JOIN

-- create the tables
CREATE TABLE join01 (
idjoin integer NOT NULL,
name varchar(255) NOT NULL,
PRIMARY KEY(idjoin)
);

CREATE TABLE join02 (
idjoin integer NOT NULL,
name varchar(255) NOT NULL,
PRIMARY KEY(idjoin)
);

-- insert sample data
INSERT INTO join01(idjoin,name) VALUES(1,'t01name01');
INSERT INTO join01(idjoin,name) VALUES(2,'t01name02');
INSERT INTO join01(idjoin,name) VALUES(3,'t01name03');
INSERT INTO join01(idjoin,name) VALUES(4,'t01name04');
INSERT INTO join01(idjoin,name) VALUES(5,'t01name05');

INSERT INTO join02(idjoin,name) VALUES(3,'t02name03');
INSERT INTO join02(idjoin,name) VALUES(4,'t02name04');
INSERT INTO join02(idjoin,name) VALUES(5,'t02name05');
INSERT INTO join02(idjoin,name) VALUES(6,'t02name06');
INSERT INTO join02(idjoin,name) VALUES(7,'t02name07');



#top SQL JOIN INNER JOIN


See also SQL JOIN INNER JOIN for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the INNER keyword
SELECT * FROM join01 JOIN join02 ON (join01.idjoin=join02.idjoin);
SELECT * FROM join01 INNER JOIN join02 ON (join01.idjoin=join02.idjoin);
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      3 | t01name03 |      3 | t02name03
      4 | t01name04 |      4 | t02name04
      5 | t01name05 |      5 | t02name05
(3 rows)



#top SQL JOIN LEFT JOIN


See also SQL JOIN LEFT JOIN for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 LEFT JOIN join02 ON (join01.idjoin=join02.idjoin);
SELECT * FROM join01 LEFT OUTER JOIN join02 ON (join01.idjoin=join02.idjoin);
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      1 | t01name01 |        |
      2 | t01name02 |        |
      3 | t01name03 |      3 | t02name03
      4 | t01name04 |      4 | t02name04
      5 | t01name05 |      5 | t02name05
(5 rows)



#top SQL JOIN RIGHT JOIN


See also SQL JOIN RIGHT JOIN for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 RIGHT JOIN join02 ON (join01.idjoin=join02.idjoin);
SELECT * FROM join01 RIGHT OUTER JOIN join02 ON (join01.idjoin=join02.idjoin);
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      3 | t01name03 |      3 | t02name03
      4 | t01name04 |      4 | t02name04
      5 | t01name05 |      5 | t02name05
        |           |      6 | t02name06
        |           |      7 | t02name07
(5 rows)



#top SQL JOIN FULL JOIN


See also SQL JOIN FULL JOIN for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 FULL JOIN join02 ON (join01.idjoin=join02.idjoin);
SELECT * FROM join01 FULL OUTER JOIN join02 ON (join01.idjoin=join02.idjoin);
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      1 | t01name01 |        |
      2 | t01name02 |        |
      3 | t01name03 |      3 | t02name03
      4 | t01name04 |      4 | t02name04
      5 | t01name05 |      5 | t02name05
        |           |      6 | t02name06
        |           |      7 | t02name07
(7 rows)



#top SQL JOIN LEFT JOIN IS NULL


See also SQL JOIN LEFT JOIN IS NULL for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 LEFT JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join02.idjoin IS NULL;
SELECT * FROM join01 LEFT OUTER JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join02.idjoin IS NULL;
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin | name
--------+-----------+--------+------
      1 | t01name01 |        |
      2 | t01name02 |        |
(2 rows)



#top SQL JOIN RIGHT JOIN IS NULL


See also SQL JOIN RIGHT JOIN IS NULL for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 RIGHT JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join01.idjoin IS NULL;
SELECT * FROM join01 RIGHT OUTER JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join01.idjoin IS NULL;
After executing the above query, the result should be similar to the following:
 idjoin | name | idjoin |   name
--------+------+--------+-----------
        |      |      6 | t02name06
        |      |      7 | t02name07
(2 rows)



#top SQL JOIN FULL JOIN IS NULL


See also SQL JOIN FULL JOIN IS NULL for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase



Example without and with the OUTER keyword
SELECT * FROM join01 FULL JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join01.idjoin IS NULL OR join02.idjoin IS NULL;
SELECT * FROM join01 FULL OUTER JOIN join02 ON (join01.idjoin=join02.idjoin) WHERE join01.idjoin IS NULL OR join02.idjoin IS NULL;
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      1 | t01name01 |        |
      2 | t01name02 |        |
        |           |      6 | t02name06
        |           |      7 | t02name07
(4 rows)



#top SQL JOIN CROSS JOIN


See also SQL JOIN CROSS JOIN for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

SELECT * FROM join01 CROSS JOIN join02;
After executing the above query, the result should be similar to the following:
 idjoin |   name    | idjoin |   name
--------+-----------+--------+-----------
      1 | t01name01 |      3 | t02name03
      1 | t01name01 |      4 | t02name04
      1 | t01name01 |      5 | t02name05
      1 | t01name01 |      6 | t02name06
      1 | t01name01 |      7 | t02name07
      2 | t01name02 |      3 | t02name03
      2 | t01name02 |      4 | t02name04
      2 | t01name02 |      5 | t02name05
      2 | t01name02 |      6 | t02name06
      2 | t01name02 |      7 | t02name07
      3 | t01name03 |      3 | t02name03
      3 | t01name03 |      4 | t02name04
      3 | t01name03 |      5 | t02name05
      3 | t01name03 |      6 | t02name06
      3 | t01name03 |      7 | t02name07
      4 | t01name04 |      3 | t02name03
      4 | t01name04 |      4 | t02name04
      4 | t01name04 |      5 | t02name05
      4 | t01name04 |      6 | t02name06
      4 | t01name04 |      7 | t02name07
      5 | t01name05 |      3 | t02name03
      5 | t01name05 |      4 | t02name04
      5 | t01name05 |      5 | t02name05
      5 | t01name05 |      6 | t02name06
      5 | t01name05 |      7 | t02name07
(25 rows)



#top Database Objects Size


See also Database Objects Size for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

PostgreSQL wiki documentation: Disk Usage
PostgreSQL documentation: Database Object Size Functions

Displaying information about databases:
-- display all databases and their sizes
SELECT datname, pg_catalog.pg_database_size(datname) AS pg_database_size FROM pg_catalog.pg_database;
-- display the size of a single database, given by name: dbname
SELECT datname, pg_catalog.pg_database_size(datname) AS pg_database_size FROM pg_catalog.pg_database WHERE datname='dbname';
-- display all databases sorted by size, smallest first
SELECT datname, pg_catalog.pg_database_size(datname) AS pg_database_size FROM pg_catalog.pg_database ORDER BY pg_catalog.pg_database_size(datname);

Displaying databases with their sizes in a human-readable format:
-- display all databases with their sizes, additionally pretty-printed by pg_size_pretty()
SELECT datname, pg_catalog.pg_database_size(datname) AS pg_database_size, pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(datname)) AS pg_database_hrsize FROM pg_catalog.pg_database;

The results should be similar to the following:
  datname  | pg_database_size
-----------+------------------
 template1 |          6005252
 template0 |          6005252
 postgres  |          6005252
 temp      |          6711300
(4 rows)

  datname  | pg_database_size | pg_database_hrsize
-----------+------------------+--------------------
 template1 |          6005252 | 5865 kB
 template0 |          6005252 | 5865 kB
 postgres  |          6005252 | 5865 kB
 temp      |          6711300 | 6554 kB
(4 rows)

Displaying databases and their sizes, or 'No Access' when the current user lacks the privilege to connect to a database:
SELECT datname AS Name, pg_catalog.pg_get_userbyid(datdba) AS Owner, CASE WHEN pg_catalog.has_database_privilege(datname, 'CONNECT') THEN pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(datname)) ELSE 'No Access' END AS SIZE FROM pg_catalog.pg_database ORDER BY CASE WHEN pg_catalog.has_database_privilege(datname, 'CONNECT') THEN pg_catalog.pg_database_size(datname) ELSE NULL END DESC LIMIT 20;

The result should be similar to the following:
   name    |  owner   |  size
-----------+----------+---------
 temp      | temp     | 6554 kB
 template1 | postgres | 5865 kB
 template0 | postgres | 5865 kB
 postgres  | postgres | 5865 kB
(4 rows)



Displaying database objects sorted by size, largest first.
SELECT nspname AS "schemaname", relname AS "relation", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 'v' THEN 'view' WHEN 'f' THEN 'foreign table' WHEN 'c' THEN 'composite type' WHEN 't' THEN 'toast table' END as "type", c.relkind, pg_size_pretty(pg_relation_size(C.oid)) AS "size" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN ('pg_catalog', 'information_schema') ORDER BY pg_relation_size(C.oid) DESC LIMIT 25;

The result should be similar to the following:
 schemaname |            relation             |    type     | relkind |    size
------------+---------------------------------+-------------+---------+------------
 pg_toast   | pg_toast_2618                   | toast table | t       | 264 kB
 pg_toast   | pg_toast_2619                   | toast table | t       | 160 kB
 public     | net_temp_mailstat_queueid_idxh  | index       | i       | 32 kB
 public     | net_temp_mailstat_fromaddr_idxh | index       | i       | 32 kB
 public     | net_temp_mailstat_totoaddr_idxh | index       | i       | 32 kB
 public     | net_temp_mailstat_strdate_idxh  | index       | i       | 32 kB
 public     | words_fulltext_pkey             | index       | i       | 16 kB
 pg_toast   | pg_toast_2619_index             | index       | i       | 16 kB
 public     | temp_limit_pkey                 | index       | i       | 16 kB
 public     | words_fulltext_body_idxg        | index       | i       | 16 kB
 pg_toast   | pg_toast_2618_index             | index       | i       | 16 kB
 public     | words_fulltext                  | table       | r       | 8192 bytes
 pg_toast   | pg_toast_2604_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_2606_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_2620_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_2609_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_2964_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_2396_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_3596_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_1255_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_1262_index             | index       | i       | 8192 bytes
 pg_toast   | pg_toast_12302_index            | index       | i       | 8192 bytes
 pg_toast   | pg_toast_12307_index            | index       | i       | 8192 bytes
 pg_toast   | pg_toast_12317_index            | index       | i       | 8192 bytes
 pg_toast   | pg_toast_12322_index            | index       | i       | 8192 bytes
(25 rows)



Displaying database objects sorted by size, largest first, excluding the pg_catalog (System Catalogs) and information_schema (The Information Schema) schemas as well as TOAST tables (Storage - TOAST, wiki:pgsql - TOAST).
SELECT nspname AS "schemaname", relname AS "relation", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 'v' THEN 'view' WHEN 'f' THEN 'foreign table' WHEN 'c' THEN 'composite type' WHEN 't' THEN 'toast table' END as "type", c.relkind, pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND nspname !~ '^pg_toast' ORDER BY pg_total_relation_size(C.oid) DESC LIMIT 25;
SELECT nspname AS "schemaname", relname AS "relation", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 'v' THEN 'view' WHEN 'f' THEN 'foreign table' WHEN 'c' THEN 'composite type' WHEN 't' THEN 'toast table' END as "type", c.relkind, pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND nspname NOT LIKE 'pg_toast%' ORDER BY pg_total_relation_size(C.oid) DESC LIMIT 25;

The result should be similar to the following:
 schemaname |            relation             |   type   | relkind | total_size
------------+---------------------------------+----------+---------+------------
 public     | net_temp_mailstat               | table    | r       | 144 kB
 public     | words_fulltext                  | table    | r       | 48 kB
 public     | net_temp_mailstat_queueid_idxh  | index    | i       | 32 kB
 public     | net_temp_mailstat_totoaddr_idxh | index    | i       | 32 kB
 public     | temp_limit                      | table    | r       | 32 kB
 public     | net_temp_mailstat_strdate_idxh  | index    | i       | 32 kB
 public     | net_temp_mailstat_fromaddr_idxh | index    | i       | 32 kB
 public     | temp_limit_pkey                 | index    | i       | 16 kB
 public     | words_fulltext_pkey             | index    | i       | 16 kB
 public     | words_fulltext_body_idxg        | index    | i       | 16 kB
 public     | word_fulltext_idword_seq        | sequence | S       | 8192 bytes
 public     | net_temp_mailstat_nowdate_idxb  | index    | i       | 8192 bytes
(12 rows)



#top Find Multiple Indexes


See also Find Multiple Indexes for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

More information under the analogous topic: Find Duplicate Indexes



#top Find Duplicate Indexes


See also Find Duplicate Indexes for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

PostgreSQL wiki documentation: Duplicate Indexes

Finds multiple indexes that have the same set of columns, same opclass,
expression and predicate -- which make them equivalent.
Usually it's safe to drop one of them, but I give no guarantees. :)

SELECT pg_size_pretty(SUM(pg_relation_size(idx))::BIGINT) AS SIZE, (array_agg(idx))[1] AS idx1, (array_agg(idx))[2] AS idx2, (array_agg(idx))[3] AS idx3, (array_agg(idx))[4] AS idx4
FROM ( SELECT indexrelid::regclass AS idx, (indrelid::text ||E'\n'|| indclass::text ||E'\n'|| indkey::text ||E'\n'|| COALESCE(indexprs::text,'')||E'\n' || COALESCE(indpred::text,'')) AS KEY FROM pg_index ) sub
GROUP BY KEY HAVING COUNT(*)>1 ORDER BY SUM(pg_relation_size(idx)) DESC;

Example:
After running the queries below, which create a table, the query above can be executed to check for and list duplicate indexes, if any exist:
CREATE TABLE temp_multiidx (
idtemp integer NOT NULL,
name varchar(255) NOT NULL,
value varchar(255) NOT NULL,
PRIMARY KEY (idtemp)
);
CREATE INDEX temp_multiidx_idtemp_idxb ON temp_multiidx USING BTREE(idtemp);

After executing the above queries, the following notice (an informational warning) may appear, stating that an index was automatically created on the column declared as the PRIMARY KEY.
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "temp_multiidx_pkey" for table "temp_multiidx"

The result should be similar to the following:
 size  |           idx1            |        idx2        | idx3 | idx4
-------+---------------------------+--------------------+------+------
 16 kB | temp_multiidx_idtemp_idxb | temp_multiidx_pkey |      |
(1 row)
The result above shows that two indexes, temp_multiidx_idtemp_idxb and temp_multiidx_pkey, were created on the same column, so one of them is in fact redundant. The first index, temp_multiidx_pkey, was created automatically: when the table was created, a notice was displayed stating that an index was automatically created on the column declared as the PRIMARY KEY. The second index, temp_multiidx_idtemp_idxb, was created explicitly with the CREATE INDEX statement. To be safe, you should also check whether both indexes are of the same type. Different index types serve different purposes, so if one index is of type HASH and the other of type BTREE, then even though both are created on the same column and one of them may in theory be redundant, they may still be used interchangeably depending on the kind of query. The information about redundant indexes should additionally be compared with information about how heavily the indexes are used in queries, by checking the pg_stat_all_indexes view. More information under the topic Find Unused Indexes.
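The type check suggested above can be sketched with a query against the pg_indexes system view (the table name temp_multiidx comes from the example above); the access method appears in the USING clause of each index definition:

```sql
-- list the definitions (including the USING <method> clause)
-- of all indexes on the temp_multiidx example table
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'temp_multiidx';
```

In this example both definitions should report USING btree, confirming that the two indexes are of the same type.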



#top Find Unused Indexes


See also Find Unused Indexes for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

PostgreSQL wiki documentation: Unused Indexes
PostgreSQL documentation: Standard Statistics Views - pg_stat_all_indexes

View Name: pg_stat_all_indexes
Description: For each index in the current database, the table and index OID, schema, table and index name, number of index scans initiated on that index, number of index entries returned by index scans, and number of live table rows fetched by simple index scans using that index.

The description above leads to the following conclusion: if the idx_scan column is zero or holds a very small value, the index is used rarely, if at all, when searching for data, so it can be dropped safely in order to save I/O operations on every data modification in the table, especially modifications affecting the column on which the index is created.

SELECT pg_stat_user_indexes.schemaname, pg_stat_user_indexes.relname AS tablename, indexrelname AS indexname, idx_scan, indisunique
FROM pg_stat_user_indexes JOIN pg_index ON (pg_stat_user_indexes.indexrelid=pg_index.indexrelid)
WHERE idx_scan = 0 AND indisunique IS false;

Example:
After running the queries below, which create a table, insert sample rows, and run a SELECT, the query above can be executed to check for and list unused indexes, if any exist:
CREATE TABLE temp_unusedidx (
idtemp integer NOT NULL,
name varchar(255) NOT NULL,
value varchar(255) NOT NULL,
PRIMARY KEY (idtemp)
);
CREATE INDEX temp_unusedidx_idtemp_idxb ON temp_unusedidx USING BTREE(idtemp);
CREATE INDEX temp_unusedidx_name_idxb ON temp_unusedidx USING BTREE(name);
INSERT INTO temp_unusedidx(idtemp,name,value) VALUES(1,'name-01','value-01');
INSERT INTO temp_unusedidx(idtemp,name,value) VALUES(2,'name-02','value-02');
INSERT INTO temp_unusedidx(idtemp,name,value) VALUES(3,'name-03','value-03');
SELECT * FROM temp_unusedidx WHERE idtemp=1;

After executing the above queries, the following notice (an informational warning) may appear, stating that an index was automatically added on the column declared as the PRIMARY KEY.
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "temp_unusedidx_pkey" for table "temp_unusedidx"

The result should be similar to the following:
 schemaname |   tablename    |        indexname         | idx_scan | indisunique
------------+----------------+--------------------------+----------+-------------
 public     | temp_unusedidx | temp_unusedidx_name_idxb |        0 | f
(1 row)
The result above shows that only the temp_unusedidx_name_idxb index, created on the name column, was not used (its idx_scan column holds the value 0). The first index, temp_unusedidx_pkey, was created automatically: when the table was created, a notice was displayed stating that an index was automatically created on the column declared as the PRIMARY KEY. The second index, temp_unusedidx_idtemp_idxb, was created explicitly with the CREATE INDEX statement. Both of these indexes refer to the same idtemp column, which appears in the SELECT query. The result above indicates that both indexes were used by the SELECT query, since their idx_scan column holds a value greater than 0, which is why they were not reported by the query above, which looks for indexes with 0 in the idx_scan column.
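The raw usage counters can also be inspected directly in the pg_stat_user_indexes view (the table name temp_unusedidx comes from the example above); note that the statistics collector may lag slightly behind the most recent queries:

```sql
-- show scan and fetch counters for all indexes
-- on the temp_unusedidx example table
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'temp_unusedidx';
```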



#top get the data type of value


PostgreSQL documentation: pg_typeof() - get the data type of any value

Example:
After running the queries below, which create a table with columns of various data types:
CREATE TYPE complex AS (
r double precision,
i double precision
);
CREATE TABLE datacolumntypes (
cnull varchar(16) NULL,
cbool boolean NOT NULL,
csint smallint NOT NULL,
cint integer NOT NULL,
cbint bigint NOT NULL,
cdec decimal NOT NULL,
cnum numeric NOT NULL,
creal real NOT NULL,
cdoublep double precision NOT NULL,
cserial serial NOT NULL,
cbserial bigserial NOT NULL,
cmoney money NOT NULL,
cchar char(16) NOT NULL,
cvchar varchar(255) NOT NULL,
cbytea bytea NOT NULL,
cts timestamp NOT NULL,
ctstz timestamp with time zone NOT NULL,
cdate date NOT NULL,
ctime time NOT NULL,
ctimetz time with time zone NOT NULL,
cival interval NOT NULL,
cinet inet NOT NULL,
ccidr cidr NOT NULL,
cmac macaddr NOT NULL,
cbit BIT(3) NOT NULL,
cvbit BIT VARYING(5) NOT NULL,
ciarray integer[][] NOT NULL,
ctarray text[][] NOT NULL,
ccomp complex NOT NULL
);

INSERT INTO datacolumntypes (cnull, cbool, csint, cint, cbint, cdec, cnum, creal, cdoublep, cserial, cbserial, cmoney, cchar, cvchar, cbytea,cts, ctstz, cdate, ctime, ctimetz, cival, cinet, ccidr, cmac, cbit, cvbit, ciarray, ctarray, ccomp)
VALUES (NULL, true, 1, 12, 123, 10, 10, 3.14, 3.14, 123, 123, 1234::text::money, 'textchar', 'textvarchar', 'textbytea', '2017/01/01 12:00:00', '2017/01/01 12:00:00', '2017/01/01', '12:00:00', '12:00:00', '1-2'::interval, '10.5.5.5', '10.5.5.5/32', '08:00:2b:01:02:03', B'100', B'101', '{{1,2},{3,4}}', '{{"123","456"},{"456","123"}}', ROW(1.2,2.4));

Check the value returned for each examined column by executing the following queries:
SELECT 'NULL' AS typename, pg_typeof(NULL) AS typeval;
SELECT 'true' AS typename, pg_typeof(true) AS typeval;
SELECT 'false' AS typename, pg_typeof(false) AS typeval;
SELECT '123' AS typename, pg_typeof(123) AS typeval;
SELECT '123.456' AS typename, pg_typeof(123.456) AS typeval;
SELECT now() AS typename, pg_typeof(now()) AS typeval;
SELECT current_timestamp AS typename, pg_typeof(current_timestamp) AS typeval;
SELECT current_date AS typename, pg_typeof(current_date) AS typeval;
SELECT current_time AS typename, pg_typeof(current_time) AS typeval;

SELECT 'cvchar' AS typename, pg_typeof('cvchar') AS typeval;
SELECT 'cvchar::varchar' AS typename, pg_typeof('cvchar'::varchar) AS typeval;
SELECT 'cvchar::text' AS typename, pg_typeof('cvchar'::text) AS typeval;
SELECT 'varchar(16)'              AS typename, pg_typeof(cnull)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'boolean'                  AS typename, pg_typeof(cbool)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'smallint'                 AS typename, pg_typeof(csint)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'integer'                  AS typename, pg_typeof(cint)     AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'bigint'                   AS typename, pg_typeof(cbint)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'decimal'                  AS typename, pg_typeof(cdec)     AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'numeric'                  AS typename, pg_typeof(cnum)     AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'real'                     AS typename, pg_typeof(creal)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'double precision'         AS typename, pg_typeof(cdoublep) AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'serial'                   AS typename, pg_typeof(cserial)  AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'bigserial'                AS typename, pg_typeof(cbserial) AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'money'                    AS typename, pg_typeof(cmoney)   AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'char(16)'                 AS typename, pg_typeof(cchar)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'varchar(255)'             AS typename, pg_typeof(cvchar)   AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'bytea'                    AS typename, pg_typeof(cbytea)   AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'timestamp'                AS typename, pg_typeof(cts)      AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'timestamp with time zone' AS typename, pg_typeof(ctstz)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'date'                     AS typename, pg_typeof(cdate)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'time'                     AS typename, pg_typeof(ctime)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'time with time zone'      AS typename, pg_typeof(ctimetz)  AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'interval'                 AS typename, pg_typeof(cival)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'inet'                     AS typename, pg_typeof(cinet)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'cidr'                     AS typename, pg_typeof(ccidr)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'macaddr'                  AS typename, pg_typeof(cmac)     AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'BIT(3)'                   AS typename, pg_typeof(cbit)     AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'BIT VARYING(5)'           AS typename, pg_typeof(cvbit)    AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'integer[][]'              AS typename, pg_typeof(ciarray)  AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'text[][]'                 AS typename, pg_typeof(ctarray)  AS typeval FROM datacolumntypes LIMIT 1;
SELECT 'complex'                  AS typename, pg_typeof(ccomp)    AS typeval FROM datacolumntypes LIMIT 1;

The results should be similar to the following:
 typename | typeval
----------+---------
 NULL     | unknown
(1 row)

 typename | typeval
----------+---------
 true     | boolean
(1 row)

 typename | typeval
----------+---------
 false    | boolean
(1 row)

 typename | typeval
----------+---------
 123      | integer
(1 row)

 typename | typeval
----------+---------
 123.456  | numeric
(1 row)

           typename            |         typeval
-------------------------------+--------------------------
 2017-11-15 22:09:08.609194+01 | timestamp with time zone
(1 row)

           typename            |         typeval
-------------------------------+--------------------------
 2017-11-15 22:09:08.612194+01 | timestamp with time zone
(1 row)

  typename  | typeval
------------+---------
 2017-11-15 | date
(1 row)

      typename      |       typeval
--------------------+---------------------
 22:09:08.617194+01 | time with time zone
(1 row)

 typename | typeval
----------+---------
 cvchar   | unknown
(1 row)

    typename     |      typeval
-----------------+-------------------
 cvchar::varchar | character varying
(1 row)

   typename   | typeval
--------------+---------
 cvchar::text | text
(1 row)

  typename   |      typeval
-------------+-------------------
 varchar(16) | character varying
(1 row)

 typename | typeval
----------+---------
 boolean  | boolean
(1 row)

 typename | typeval
----------+----------
 smallint | smallint
(1 row)

 typename | typeval
----------+---------
 integer  | integer
(1 row)

 typename | typeval
----------+---------
 bigint   | bigint
(1 row)

 typename | typeval
----------+---------
 decimal  | numeric
(1 row)

 typename | typeval
----------+---------
 numeric  | numeric
(1 row)

 typename | typeval
----------+---------
 real     | real
(1 row)

     typename     |     typeval
------------------+------------------
 double precision | double precision
(1 row)

 typename | typeval
----------+---------
 serial   | integer
(1 row)

 typename  | typeval
-----------+---------
 bigserial | bigint
(1 row)

 typename | typeval
----------+---------
 money    | money
(1 row)

 typename |  typeval
----------+-----------
 char(16) | character
(1 row)

   typename   |      typeval
--------------+-------------------
 varchar(255) | character varying
(1 wiersz)

 typename | typeval
----------+---------
 bytea    | bytea
(1 wiersz)

 typename  |           typeval
-----------+-----------------------------
 timestamp | timestamp without time zone
(1 wiersz)

         typename         |         typeval
--------------------------+--------------------------
 timestamp with time zone | timestamp with time zone
(1 wiersz)

 typename | typeval
----------+---------
 date     | date
(1 wiersz)

 typename |        typeval
----------+------------------------
 time     | time without time zone
(1 wiersz)

      typename       |       typeval
---------------------+---------------------
 time with time zone | time with time zone
(1 wiersz)

 typename | typeval
----------+----------
 interval | interval
(1 wiersz)

 typename | typeval
----------+---------
 inet     | inet
(1 wiersz)

 typename | typeval
----------+---------
 cidr     | cidr
(1 wiersz)

 typename | typeval
----------+---------
 macaddr  | macaddr
(1 wiersz)

 typename | typeval
----------+---------
 BIT(3)   | bit
(1 wiersz)

    typename    |   typeval
----------------+-------------
 BIT VARYING(5) | bit varying
(1 wiersz)

  typename   |  typeval
-------------+-----------
 integer[][] | integer[]
(1 wiersz)

 typename | typeval
----------+---------
 text[][] | text[]
(1 wiersz)

 typename | typeval
----------+---------
 complex  | complex
(1 wiersz)



#top Load Balancing with HAProxy


See also Load Balancing with HAProxy for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Load Balancing with HAProxy for: ProFTPd | Pure-FTPd | vsftpd | Dovecot | Postfix | OpenLDAP
See also Load Balancing with HAProxy for: pgpool | PostgreSQL | MySQL | Firebird

Not applicable! This configuration concerns servers handling the HTTP protocol.
For other protocols, HAProxy offers load balancing in TCP mode.



#top TLS SNI


See also TLS SNI for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also TLS SNI for: ProFTPd | Pure-FTPd | vsftpd | Dovecot | Postfix | OpenLDAP
See also TLS SNI for: pgpool | PostgreSQL | MySQL | Firebird

PostgreSQL documentation:



#top SNI config


See also SNI config for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also SNI config for: ProFTPd | Pure-FTPd | vsftpd | Dovecot | Postfix | OpenLDAP
See also SNI config for: pgpool | PostgreSQL | MySQL | Firebird



#top SNI check


See also SNI check for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also SNI check for: ProFTPd | Pure-FTPd | vsftpd | Dovecot | Postfix | OpenLDAP
See also SNI check for: pgpool | PostgreSQL | MySQL | Firebird



#top Protocol Secure


See also Protocol Secure for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Protocol Secure for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Protocol Secure for: pgpool | PostgreSQL | MySQL | Firebird



#top Remove Service Version Information


See also Remove Service Version Information for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Remove Service Version Information for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Remove Service Version Information for: pgpool | PostgreSQL | MySQL | Firebird

Not applicable! Services of this type are not publicly accessible. For security reasons, access to them is restricted by appropriate rules at the network layer.



#top Add HTTP Response Headers Security


See also Add HTTP Response Headers Security for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Add HTTP Response Headers Security for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Add HTTP Response Headers Security for: pgpool | PostgreSQL | MySQL | Firebird

Not applicable! The recommended configuration concerns servers handling the HTTP protocol.



#top TLS Secure


See also TLS Secure for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also TLS Secure for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also TLS Secure for: pgpool | PostgreSQL | MySQL | Firebird



#top Disable SSLv2/SSLv3 Protocols


See also Disable SSLv2/SSLv3 Protocols for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Disable SSLv2/SSLv3 Protocols for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Disable SSLv2/SSLv3 Protocols for: pgpool | PostgreSQL | MySQL | Firebird

(See the TLS Protocols section)
Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566)
Vulnerability Summary for CVE-2014-3566



#top Disable weak Cipher Suites


See also Disable weak Cipher Suites for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Disable weak Cipher Suites for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Disable weak Cipher Suites for: pgpool | PostgreSQL | MySQL | Firebird

(See the TLS CipherSuite section)
MITRE CVE dictionary (CVE-2015-2808)
Vulnerability Summary for CVE-2015-2808
Ivan Ristic Mitigating the BEAST attack on TLS



#top Disable RC4 CipherSuite


See also Disable RC4 CipherSuite for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Disable RC4 CipherSuite for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Disable RC4 CipherSuite for: pgpool | PostgreSQL | MySQL | Firebird

More information in the analogous topic: Disable weak Cipher Suites



#top Disable Anonymous CipherSuite


See also Disable Anonymous CipherSuite for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Disable Anonymous CipherSuite for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Disable Anonymous CipherSuite for: pgpool | PostgreSQL | MySQL | Firebird

More information in the analogous topic: Disable weak Cipher Suites



#top Disable SSL Compression


See also Disable SSL Compression for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Disable SSL Compression for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Disable SSL Compression for: pgpool | PostgreSQL | MySQL | Firebird

(See the TLS Compression section)
The CRIME attack uses SSL Compression

Bug 857051 - (CRIME, CVE-2012-4929) CVE-2012-4929 SSL/TLS CRIME attack against HTTPS
The openssl packages in Red Hat Enterprise Linux 5 (starting with RHBA-2009:0181 update released in Red Hat Enterprise Linux 5.3) and 6, and also in Fedora, contain a patch that makes the library check if OPENSSL_NO_DEFAULT_ZLIB environment variable is set (can have arbitrary value, even empty string) and disable the default zlib support.

Setting the OPENSSL_NO_DEFAULT_ZLIB environment variable before starting a client or a server application using OpenSSL can be used to disable zlib compression support and hence mitigate this flaw. For example, httpd with mod_ssl has compression enabled by default in Red Hat Enterprise Linux 5 and 6, and hence it is used when client also supports it. Adding the following line to the /etc/sysconfig/httpd file:

export OPENSSL_NO_DEFAULT_ZLIB=1

and restarting the httpd service disables the use of SSL/TLS compression in mod_ssl and the compression will not be negotiated even when connecting client supports it. Note that this environment variable only affects the use of SSL/TLS protocol compression and does not affect the use of HTTP protocol compression implemented by the mod_deflate module.
Testing showed that software based on the OpenSSL library shipped as openssl-0.9.8e-7.el5 on CentOS 5.* and as openssl-1.0.0-4.el6 on CentOS 6.* does not support compression, so on Linux systems of the CentOS 5.* and CentOS 6.* distributions no changes are necessary to disable compression support.

CVE-2012-4929 SSL/TLS CRIME attack against HTTPS
The MITRE CVE dictionary describes this issue as:

The TLS protocol 1.2 and earlier, as used in Mozilla Firefox, Google Chrome, Qt, and other products, can encrypt compressed data without properly obfuscating the length of the unencrypted data, which allows man-in-the-middle attackers to obtain plaintext HTTP headers by observing length differences during a series of guesses in which a string in an HTTP request potentially matches an unknown string in an HTTP header, aka a "CRIME" attack.

Find out more about CVE-2012-4929 from the MITRE CVE dictionary and NIST NVD.




#top Set custom DH parameters


See also Set custom DH parameters for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Set custom DH parameters for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Set custom DH parameters for: pgpool | PostgreSQL | MySQL | Firebird

(See the TLS Cert/Key File section)



#top Avoid certificates with Signature Algorithm: SHA1


See also Avoid certificates with Signature Algorithm: SHA1 for: Apache | Nginx | Lighttpd | thttpd | HAProxy | Varnish | SQUID
See also Avoid certificates with Signature Algorithm: SHA1 for: ProFTPd | Pure-FTPd | vsftpd | Postfix | Dovecot | OpenLDAP
See also Avoid certificates with Signature Algorithm: SHA1 for: pgpool | PostgreSQL | MySQL | Firebird

Mozilla plans to phase out support of SHA-1 hash algorithm
After Jan. 1, 2016, Firefox will present an "Untrusted Connection" error when a newly issued SHA-1 certificate is encountered, and after Jan. 1, 2017, Firefox will present an "Untrusted Connection" error whenever a SHA-1 certificate is encountered at all, according to a Tuesday post.

SHA-1 has been around for nearly two decades, and in recent years researchers have demonstrated SHA-1 mathematical weaknesses that could be exploited given enough time and computing power, Richard Barnes, engineering manager, cryptography and PKI, with Mozilla, told SCMagazine.com in a Wednesday email correspondence.

Mozilla Security Blog
Many of the certificates used by secure websites today are signed using algorithms based on a hash algorithm called SHA-1. The integrity of the hash algorithm used in signing a certificate is a critical element in the security of the certificate. Weaknesses in hash algorithms can lead to situations in which attackers can obtain fraudulent certificates. Mozilla, along with other browser vendors, is working on a plan to phase out support for the SHA-1 hash algorithm.

SHA-1 is nearly twenty years old, and is beginning to show its age. In the last few years, collision attacks undermining some properties of SHA-1 have been getting close to being practical. Collision attacks against the older MD5 hash algorithm have been used to obtain fraudulent certificates, so the improving feasibility of collision attacks against SHA-1 is concerning. In order to avoid the need for a rapid transition should a critical attack against SHA-1 be discovered, we are proactively phasing out SHA-1.



#top Performance Tuning


See also Performance Tuning for: PostgreSQL | MySQL | Firebird | SQLite | MsSQL | Oracle | DB2 | Informix | Sybase

Performance Optimization
Tuning Your PostgreSQL Server

#top SYSVIPC


Dokumentacja PostgreSQL: Shared Memory and Semaphores
System V IPC parameters

Name    | Description                                              | Reasonable values
--------+----------------------------------------------------------+------------------------------------------------------------
SHMMAX  | Maximum size of shared memory segment (bytes)            | at least several megabytes (see text)
SHMMIN  | Minimum size of shared memory segment (bytes)            | 1
SHMALL  | Total amount of shared memory available (bytes or pages) | if bytes, same as SHMMAX; if pages, ceil(SHMMAX/PAGE_SIZE)
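The SHMALL formula from the table can be checked with a short sketch; the PAGE_SIZE and SHMMAX values below are illustrative assumptions, not recommendations:

```python
import math

PAGE_SIZE = 4096      # assumed x86 page size; check with `getconf PAGE_SIZE`
SHMMAX = 2 * 1024**3  # assumed 2 GB segment limit, for illustration only

# per the table: if SHMALL is counted in pages, use ceil(SHMMAX/PAGE_SIZE)
shmall_pages = math.ceil(SHMMAX / PAGE_SIZE)
print(shmall_pages)   # 524288 pages
```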

shared_buffers
Sets the amount of memory the database server uses for shared memory buffers.
The default is typically 32 megabytes (32MB), but might be less if your kernel
settings will not support it (as determined during initdb). This setting must
be at least 128 kilobytes. (Non-default values of BLCKSZ change the minimum.)
However, settings significantly higher than the minimum are usually needed for
good performance. This parameter can only be set at server start.

If you have a dedicated database server with 1GB or more of RAM, a reasonable
starting value for shared_buffers is 25% of the memory in your system. There
are some workloads where even large settings for shared_buffers are effective,
but because PostgreSQL also relies on the operating system cache, it is
unlikely that an allocation of more than 40% of RAM to shared_buffers will
work better than a smaller amount. Larger settings for shared_buffers usually
require a corresponding increase in checkpoint_segments, in order to spread
out the process of writing large quantities of new or changed data over a
longer period of time.

On systems with less than 1GB of RAM, a smaller percentage of RAM is
appropriate, so as to leave adequate space for the operating system. Also, on
Windows, large values for shared_buffers aren't as effective. You may find
better results keeping the setting relatively low and using the operating
system cache more instead. The useful range for shared_buffers on Windows
systems is generally from 64MB to 512MB.

Increasing this parameter might cause PostgreSQL to request more System V shared memory than your operating system's default configuration allows. See Section 17.4.1 for information on how to adjust those parameters, if necessary.

# used for managing connections and active operations
# should be about 1/4 of the available memory
shared_buffers = 1024MB
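The 25% rule quoted above can be sketched as a small helper; the divisor used for sub-1 GB machines is an illustrative assumption, since the text only says "a smaller percentage":

```python
def suggested_shared_buffers_mb(total_ram_mb: int) -> int:
    # dedicated server with >= 1 GB of RAM: start at 25% of RAM
    if total_ram_mb >= 1024:
        return total_ram_mb // 4
    # below 1 GB the text recommends a smaller share to leave room
    # for the OS; 1/8 here is an assumption for illustration
    return total_ram_mb // 8

print(suggested_shared_buffers_mb(4096))  # 1024 (MB), matching the snippet above
```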



work_mem
Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. The value defaults to one megabyte (1MB). Note that for a complex query, several sort or hash operations might be running in parallel; each one will be allowed to use as much memory as this value specifies before it starts to put data into temporary files. Also, several running sessions could be doing such operations concurrently. So the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.

# dedicated memory for each sort/hash operation
# rule of thumb: available_memory/(2*max_connections) as a safe starting
# point, up to available_memory/max_connections for normal operations.
# another way of sizing it is to run the query under EXPLAIN ANALYZE: if
# the plan shows "Sort Method: external merge  Disk: 7532kB", then a
# work_mem of 8MB can do wonders.
work_mem = 1MB
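Because every sort/hash operation in every session may use up to work_mem, the worst case multiplies out; a rough sketch (ops_per_query is a workload-dependent guess):

```python
def worst_case_sort_memory_mb(work_mem_mb: int, max_connections: int,
                              ops_per_query: int = 1) -> int:
    # each concurrent sort/hash may take up to work_mem before
    # spilling to temporary disk files
    return work_mem_mb * max_connections * ops_per_query

# e.g. work_mem=4MB, 100 connections, 3 sorts/hashes per query:
print(worst_case_sort_memory_mb(4, 100, 3))  # 1200 (MB)
```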



maintenance_work_mem
Specifies the maximum amount of memory to be used in maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults to 16 megabytes (16MB). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps.

Note that when autovacuum runs, up to autovacuum_max_workers times this memory may be allocated, so be careful not to set the default value too high.

# same as work_mem, but for VACUUM, ALTER TABLE, and other DDL queries.
# a value around 256MB is reasonable
maintenance_work_mem = 256MB



wal_buffers
The amount of memory used in shared memory for WAL data. The default is 64 kilobytes (64kB). The setting need only be large enough to hold the amount of WAL data generated by one typical transaction, since the data is written out to disk at every transaction commit. This parameter can only be set at server start.

Increasing this parameter might cause PostgreSQL to request more System V shared memory than your operating system's default configuration allows. See Section 17.4.1 for information on how to adjust those parameters, if necessary.

# size of the in-memory write-ahead log buffer
# the default is 64 kB; 1 MB is enough for large systems.
# SMP machines do better with 8 MB
wal_buffers = 1MB



wal_sync_method
Method used for forcing WAL updates out to disk. If fsync is off then this setting is irrelevant, since updates will not be forced out at all. Possible values are:
  • open_datasync (write WAL files with open() option O_DSYNC)
  • fdatasync (call fdatasync() at each commit)
  • fsync (call fsync() at each commit)
  • fsync_writethrough (call fsync() at each commit, forcing write-through of any disk write cache)
  • open_sync (write WAL files with open() option O_SYNC)
The open_* options also use O_DIRECT if available. Not all of these choices are available on all platforms. The default is the first method in the above list that is supported by the platform, except that fdatasync is the default on Linux. The default is not necessarily ideal; it might be necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. These aspects are discussed in Section 28.1. This parameter can only be set in the postgresql.conf file or on the server command line.

# after every transaction, PostgreSQL forces its write-ahead log out to disk.
# defaults to fsync.
# often switched to open_sync, but that option is buggy on many platforms;
# benchmark with a very heavy workload before switching.
wal_sync_method = fsync



effective_cache_size
Sets the planner's assumption about the effective size of the disk cache that is available to a single query. This is factored into estimates of the cost of using an index; a higher value makes it more likely index scans will be used, a lower value makes it more likely sequential scans will be used. When setting this parameter you should consider both PostgreSQL's shared buffers and the portion of the kernel's disk cache that will be used for PostgreSQL data files. Also, take into account the expected number of concurrent queries on different tables, since they will have to share the available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL, nor does it reserve kernel disk cache; it is used only for estimation purposes. The default is 128 megabytes (128MB).

# estimate of how much memory is available for disk caching by the OS and
# within the DB itself
# recommended: about 1/2 of available memory. On Unix it can be set to the
# free+cached value reported by "free".
effective_cache_size = 512MB



checkpoint_segments
Maximum number of log file segments between automatic WAL checkpoints (each segment is normally 16 megabytes). The default is three segments. Increasing this parameter can increase the amount of time needed for crash recovery. This parameter can only be set in the postgresql.conf file or on the server command line.

# by default 3 segments * 16MB = 48 MB between checkpoints; such frequent
# checkpoints can be resource intensive on modern systems.
# setting it to 32 (a checkpoint every 512 MB) can be effective in reducing disk I/O
checkpoint_segments = 32
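The comment's arithmetic, spelled out (the segment size comes from the documentation text above):

```python
SEGMENT_MB = 16  # normal WAL segment size

def checkpoint_spacing_mb(checkpoint_segments: int) -> int:
    # a checkpoint is forced after at most this much WAL has been written
    return checkpoint_segments * SEGMENT_MB

print(checkpoint_spacing_mb(3))   # 48  (MB) -- the default
print(checkpoint_spacing_mb(32))  # 512 (MB) -- the value set above
```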



checkpoint_timeout
Maximum time between automatic WAL checkpoints, in seconds. The default is five minutes (5min). Increasing this parameter can increase the amount of time needed for crash recovery. This parameter can only be set in the postgresql.conf file or on the server command line.

# a checkpoint occurs at least every 5 minutes; can be increased
checkpoint_timeout = 5min



default_statistics_target
Sets the default statistics target for table columns that have not had a column-specific target set via ALTER TABLE SET STATISTICS. Larger values increase the time needed to do ANALYZE, but might improve the quality of the planner's estimates. The default is 100. For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.

# increase it to collect more statistics, helping PostgreSQL
# create better query plans
default_statistics_target=100



synchronous_commit
Specifies whether transaction commit will wait for WAL records to be written to disk before the command returns a "success" indication to the client. The default, and safe, setting is on. When off, there can be a delay between when success is reported to the client and when the transaction is really guaranteed to be safe against a server crash. (The maximum delay is three times wal_writer_delay.) Unlike fsync, setting this parameter to off does not create any risk of database inconsistency: a crash might result in some recent allegedly-committed transactions being lost, but the database state will be just the same as if those transactions had been aborted cleanly. So, turning synchronous_commit off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more discussion see Section 28.3.

This parameter can be changed at any time; the behavior for any one transaction is determined by the setting in effect when it commits. It is therefore possible, and useful, to have some transactions commit synchronously and others asynchronously. For example, to make a single multi-statement transaction commit asynchronously when the default is the opposite, issue SET LOCAL synchronous_commit TO OFF within the transaction.

# synchronous commit, introduced in PostgreSQL 8.3, allows a small risk of data
# loss (on crash) in exchange for a large boost in updates per second.
synchronous_commit=on



random_page_cost
Sets the planner's estimate of the cost of a non-sequentially-fetched disk page. The default is 4.0. Reducing this value relative to seq_page_cost will cause the system to prefer index scans; raising it will make index scans look relatively more expensive. You can raise or lower both values together to change the importance of disk I/O costs relative to CPU costs, which are described by the following parameters.

# if, after tweaking every variable, your query execution plans are still not
# acceptable, you can bring random_page_cost down to 3.0 or 2.0.
# lowering this value encourages the query optimizer to use random-access index scans.
random_page_cost=4.0



temp_buffers
Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes (8MB). The setting can be changed within individual sessions, but only up until the first use of temporary tables within a session; subsequent attempts to change the value will have no effect on that session.

A session will allocate temporary buffers as needed up to the limit given by temp_buffers. The cost of setting a large value in sessions that do not actually need a lot of temporary buffers is only a buffer descriptor, or about 64 bytes, per increment in temp_buffers. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, BLCKSZ bytes).
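The overhead described above can be estimated with a sketch; BLCKSZ and the ~64-byte descriptor cost come from the text, while the scenario numbers are illustrative:

```python
BLCKSZ = 8192          # default block size (bytes)
DESCRIPTOR_BYTES = 64  # approximate cost of one unused buffer slot

def temp_buffer_overhead_bytes(configured: int, actually_used: int) -> int:
    # every configured slot costs a descriptor; each buffer actually
    # touched additionally consumes a full block
    return configured * DESCRIPTOR_BYTES + actually_used * BLCKSZ

# 1024 slots configured (8MB worth of 8kB buffers), only 16 actually used:
print(temp_buffer_overhead_bytes(1024, 16))  # 196608 bytes (~192 kB)
```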



max_prepared_transactions
Sets the maximum number of transactions that can be in the "prepared" state simultaneously (see PREPARE TRANSACTION). Setting this parameter to zero (which is the default) disables the prepared-transaction feature. This parameter can only be set at server start.

If you are not planning to use prepared transactions, this parameter should be set to zero to prevent accidental creation of prepared transactions. If you are using prepared transactions, you will probably want max_prepared_transactions to be at least as large as max_connections, so that every session can have a prepared transaction pending.

Increasing this parameter might cause PostgreSQL to request more System V shared memory than your operating system's default configuration allows. See Section 17.4.1 for information on how to adjust those parameters, if necessary.





Last modified: 2018/04/11 23:28:37 (6 years ago), textsize: 82,9 kB, htmlsize: 129 kB
