oracle.notes: So as not to forget anything useful
Andrey Zaytsev

2022-12-05 PowerShell: docker - print tags for image
<pre><code>$imageName='camunda/camunda-bpm-platform'
$nextLink="https://registry.hub.docker.com/v2/repositories/$imageName/tags"
echo "Receiving: $nextLink"
$res=$null
Do {
$content = curl $nextLink | ConvertFrom-Json  # curl is the PowerShell alias for Invoke-WebRequest here
$nextLink = $content.next
$res += $content.results.name
} While ($nextLink)
echo $res | sort
</code></pre>
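<p>The same pagination loop can be sketched in Python. This is a minimal sketch, not part of the original post: the fetch function is injected so the paging logic can be exercised without network access; with the default it would call the same Docker Hub v2 tags endpoint.</p>

```python
import json
from urllib.request import urlopen

def list_tags(first_url, fetch=lambda url: json.load(urlopen(url))):
    """Follow the 'next' links of a paged tag listing and collect all tag names."""
    names, url = [], first_url
    while url:
        page = fetch(url)                 # dict with 'results' and 'next' keys
        names += [r["name"] for r in page["results"]]
        url = page.get("next")            # None/absent on the last page
    return sorted(names)
```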

2022-11-25 Spring enum properties possible values
<p>From <a href="https://stackoverflow.com/a/45997666/3836946">this</a> reply</p>
<pre><code>RelaxedDataBinder#bind()
RelaxedConversionService#convert()
1. try DefaultConversionService#convert()
# only support `A_VALUE`
StringToEnumConverterFactory#StringToEnum#convert()
2. then GenericConversionService#convert()
# the config key can be :
# 0 = "a-value"
# 1 = "a_value"
# 2 = "aValue"
# 3 = "avalue"
# 4 = "A-VALUE"
# 5 = "A_VALUE"
# 6 = "AVALUE"
RelaxedConversionService$StringToEnumIgnoringCaseConverterFactory$StringToEnum#convert()
</code></pre>
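<p>The matching rule behind that list can be illustrated in Python. This is not Spring’s actual implementation, just a sketch of the relaxed comparison: case and the '-'/'_' separators are ignored.</p>

```python
def relaxed_enum_match(config_value, enum_names):
    """Mimic Spring's relaxed binding: compare ignoring case and '-'/'_' separators."""
    def canon(s):
        return s.replace("-", "").replace("_", "").upper()
    return next((n for n in enum_names if canon(n) == canon(config_value)), None)
```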

2022-08-26 Oracle 19c: JSON and character sets
<h3 id="given">Given</h3>
<p>An Oracle 19c database in the WE8MSWIN1252 character set.<br>
Processing UTF-8 JSON with different functions.</p>
<h3 id="test">Test</h3>
<pre><code>DECLARE
c_json_utf8 VARCHAR2(32000) := CONVERT('{"a":"' || CHR(246) || '"}', 'AL32UTF8');
l_obj json_object_t;
BEGIN
l_obj := NEW json_object_t(c_json_utf8);
dbms_output.put_line('json_object_t.get_string: ' || l_obj.get_string('a'));
dbms_output.put_line('json_value: ' || json_value(c_json_utf8, '$.a'));
FOR rec IN (
SELECT *
FROM JSON_TABLE(c_json_utf8, '$'
COLUMNS(a VARCHAR2 path '$.a')
)
) LOOP
dbms_output.put_line('json_table: ' || rec.a);
END LOOP;
END;
/
</code></pre>
<h3 id="result">Result</h3>
<pre><code>json_object_t.get_string: ö
json_value: ö
json_table: ö
</code></pre>

2022-06-14 Extract information about deadlocks from traces
<p>If you don’t have access to the filesystem, or just love doing everything in SQL, here is a quick-and-dirty way to extract information about ORA-00060 deadlocks from Oracle trace files.<br>
I’m too lazy to write a proper script with variables, so comments in the code mark the steps</p>
<pre><code>-- Get trace files dir
SELECT VALUE FROM v$diag_info WHERE NAME = 'Diag Trace';
-- create directory object for trace files dir
CREATE DIRECTORY test_trace_dir AS '&VALUE_FROM_PREVIOUS_STEP';
-- file-to-CLOB extractor function, found on Stack Overflow
CREATE OR REPLACE FUNCTION test_trace_read(p_file_name IN VARCHAR2) RETURN CLOB AS
l_clob CLOB;
l_bfile BFILE;
dst_offset NUMBER := 1;
src_offset NUMBER := 1;
lang_ctx NUMBER := dbms_lob.default_lang_ctx;
warning NUMBER;
BEGIN
l_bfile := bfilename('TEST_TRACE_DIR', p_file_name);
dbms_lob.fileopen(l_bfile);
dbms_lob.createtemporary(l_clob, TRUE);
dbms_lob.loadclobfromfile(dest_lob => l_clob,
src_bfile => l_bfile,
amount => dbms_lob.getlength(l_bfile),
dest_offset => dst_offset,
src_offset => src_offset,
bfile_csid => dbms_lob.default_csid,
lang_context => lang_ctx,
warning => warning);
dbms_lob.fileclose(l_bfile);
RETURN l_clob;
END;
/
-- query to return the relevant piece of each trace file
-- the full trace file content can be returned as well
-- the number of recent deadlocks to inspect is given by &NUM_ROWS
WITH traces AS (
SELECT t.originating_timestamp, t.module_id, t.message_text,
regexp_replace(t.message_text, '.*More info in file (.*)\.$', '\1') full_file_name,
regexp_replace(t.message_text, '.*More info in file .*\/(.*)\.\s*$', '\1') file_name -- depends on the alert message format!
FROM V$DIAG_ALERT_EXT t
WHERE message_text LIKE '%ORA-00060%'
ORDER BY 1 DESC
),
trace_files AS (
SELECT test_trace_read(file_name) CONTENT, t.*
FROM traces t
WHERE ROWNUM <= &NUM_ROWS
)
SELECT SUBSTR(CONTENT, INSTR(CONTENT, 'Deadlock graph:'), INSTR(CONTENT, '----- Call Stack Trace -----') - INSTR(CONTENT, 'Deadlock graph:')) substr_trace,
t.originating_timestamp, t.module_id, t.full_file_name, t.message_text
FROM trace_files t
;
</code></pre>
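<p>The two regexp_replace calls above rely on the format of the alert message. The same extraction can be checked in Python; the message text below is a made-up example in that format, not taken from a real trace.</p>

```python
import re

# hypothetical alert-log message in the expected format
msg = ("ORA-00060: Deadlock detected. "
       "More info in file /u01/app/oracle/diag/rdbms/db1/db1/trace/db1_ora_12345.trc.")

full_file_name = re.sub(r".*More info in file (.*)\.$", r"\1", msg)   # full path
file_name = re.sub(r".*More info in file .*/(.*)\.\s*$", r"\1", msg)  # basename only
```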

2022-06-08 PowerShell: basic listener
<p>Found <a href="https://github.com/microsoft/WSL/issues/4585#issuecomment-540611755">here</a></p>
<pre><code>$Listener = [System.Net.Sockets.TcpListener]8080;
$Listener.Start();
$Listener.AcceptSocket();
</code></pre>

2022-05-09 Jenkins: print password
<pre><code>stage('Sandbox') {
    withCredentials([usernamePassword(credentialsId: 'TestUser', passwordVariable: 'C_PASS', usernameVariable: 'C_USER')]) {
        creds = "\nUser: ${C_USER}\nPassword: ${C_PASS}\n"
    }
    println creds
}
</code></pre>

2022-05-02 GIT: run the changes from the current branch
<p>Runs the files that differ between the current feature branch and the target branch (configured as dev) into which feature branches are merged in the current dev process.<br>
The $file_list array is built from the files:</p>
<ul>
<li>from the merge commit, if the feature branch is already merged</li>
<li>from the fork-point, if the branch is not merged<br>
In my scenario all the files are run in SQL*Plus, but the $file_list variable can be used for any purpose</li>
</ul>
<pre><code># Run the changes made in the current branch, compared to the $target_branch_name = dev branch, in the database given as the first parameter
# If the changes are already merged, merge-base --fork-point produces wrong results, so git log -m -1 --first-parent on the merge commit is used to find the difference
# If no merge commit is present, merge-base --fork-point is used to build the difference
$target_branch_name = 'dev'
$repo = 'C:\code\myrepo'
cd $repo
$dbname=$args[0]
$script = "connect $dbname"
$merge_commit = git log HEAD..$target_branch_name --ancestry-path --merges --reverse --pretty=format:%h | Select-Object -First 1
if ( $null -eq $merge_commit )
{
'...No subsequent merges...'
$file_list = (git diff --name-only $(git merge-base --fork-point $target_branch_name))
} else {
"Merge commit found $merge_commit"
$file_list = git log -m -1 --first-parent --name-only --pretty="format:" $merge_commit
}
$file_list
#process file list in sql*plus
$file_list |
ForEach {
$fullname = "$repo\$_"
$script += "PROMPT Running $fullname`n"
$script += "@$fullname`n"
}
$script += 'EXIT'
echo $script
echo $script | sqlplus /nolog
</code></pre>

2022-03-24 Print JSON attribute names and types
<p>A script that traverses a JSON document, prints the type of each attribute (except those with null values) and replaces the values with the attribute names (useful for testing)</p>
<pre><code>DECLARE
l_clob CLOB := '{"a": 1, "b": "text", "c": {"d": "x"}, "e": [{"f": 2}]}'; -- sample JSON; replace with the document to process
l_json json_object_t;
l_number NUMBER := 1;
FUNCTION process_object(p_json json_object_t, p_level NUMBER := 0, p_parent_object_name VARCHAR2 := NULL) RETURN json_object_t IS
l_keys JSON_KEY_LIST;
l_padding VARCHAR2(100):= LPAD(' ', p_level * 3, ' ');
l_result json_object_t := p_json;
BEGIN
l_keys := p_json.get_keys;
FOR i IN 1 .. l_keys.count LOOP
IF p_json.get(l_keys(i)).is_object THEN
dbms_output.put_line(l_padding || l_keys(i) || ' is object');
l_result.put(
l_keys(i),
process_object(TREAT(p_json.get(l_keys(i)) AS json_object_t), p_level + 1, LTRIM(p_parent_object_name || '.' || l_keys(i), '.'))
);
ELSIF p_json.get(l_keys(i)).is_number THEN
dbms_output.put_line(l_padding || l_keys(i) || ' is number');
l_result.put(l_keys(i), l_number);
l_number := l_number + 1;
--
ELSIF p_json.get(l_keys(i)).is_string THEN
dbms_output.put_line(l_padding || l_keys(i) || ' is string');
l_result.put(l_keys(i), LTRIM(p_parent_object_name || '_', '_') || l_keys(i));
ELSIF p_json.get(l_keys(i)).is_array THEN
dbms_output.put_line(l_padding || l_keys(i) || ' is array');
DECLARE
l_arr JSON_ARRAY_T := TREAT(p_json.get(l_keys(i)) AS JSON_ARRAY_T);
l_new_array JSON_ARRAY_T := NEW JSON_ARRAY_T;
BEGIN
FOR j IN 0 .. l_arr.get_size - 1 LOOP
IF l_arr.get(j).is_object THEN
l_new_array.append(
process_object(TREAT(l_arr.get(j) AS json_object_t), p_level + 1, LTRIM(p_parent_object_name || '.' || l_keys(i) || '[' || j || ']', '.'))
);
END IF;
-- check other types?
END LOOP;
l_result.put(l_keys(i), l_new_array);
END;
ELSE
dbms_output.put_line(l_padding || l_keys(i) || ' is unknown');
l_result.put(l_keys(i), LTRIM(p_parent_object_name || '_', '_') || l_keys(i));
END IF;
END LOOP;
RETURN l_result;
END;
BEGIN
l_json := json_object_t.parse(l_clob);
dbms_output.put_line( process_object(l_json).stringify );
END;
/
</code></pre>
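<p>The traversal can be sketched in Python as well. This is a loose analogue of the PL/SQL block above (the exact path format for array elements differs slightly): numbers become a running counter, strings become their attribute path joined with '_'.</p>

```python
from itertools import count

def rewrite(doc):
    """Replace numbers with a running counter and strings with their attribute path."""
    seq = count(1)

    def walk(node, path=""):
        if isinstance(node, dict):
            return {k: walk(v, f"{path}_{k}".lstrip("_")) for k, v in node.items()}
        if isinstance(node, list):
            return [walk(v, path) for v in node]
        if isinstance(node, bool):      # bool is an int subclass; keep it as-is
            return node
        if isinstance(node, (int, float)):
            return next(seq)
        if isinstance(node, str):
            return path
        return node                     # None and anything else unchanged

    return walk(doc)
```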

2021-12-17 Generate queries for PK-FK data check
<p>Given:</p>
<ul>
<li>table to be inspected
<ul>
<li>l_owner</li>
<li>l_table_name</li>
</ul>
</li>
<li>row in this table
<ul>
<li>l_table_condition</li>
</ul>
</li>
</ul>
<p>Result: generated queries to extract parent rows (the “Reference to” section) and child rows (the “Referenced by” section)</p>
<pre><code>DECLARE
l_owner VARCHAR2(30) := UPPER('schema_name');
l_table_name VARCHAR2(30) := UPPER('table_name');
l_table_condition VARCHAR2(32767) := 't.id = 133';
l_chi_cols VARCHAR2(32767);
l_par_cols VARCHAR2(32767);
FUNCTION get_constraint_columns(p_cons_owner VARCHAR2, p_cons_name VARCHAR2) RETURN VARCHAR2 IS
l_result VARCHAR2(32767);
BEGIN
SELECT LISTAGG(cols.column_name, ', ') WITHIN GROUP (ORDER BY cols.position)
INTO l_result
FROM dba_cons_columns cols
WHERE cols.owner = p_cons_owner
AND cols.constraint_name = p_cons_name;
RETURN l_result;
END;
BEGIN
dbms_output.put_line('-- Reference to');
dbms_output.put_line(LPAD('-', 10, '-'));
FOR rec IN (
SELECT c.owner chi_owner, c.table_name chi_table_name, c.constraint_name chi_constraint_name
, p.owner par_owner, p.table_name par_table_name, p.constraint_name par_constraint_name
FROM dba_constraints c,
dba_constraints p
WHERE c.owner = l_owner
AND c.table_name = l_table_name
AND c.constraint_type = 'R'
AND p.owner = c.r_owner
AND p.constraint_name = c.r_constraint_name
) LOOP
l_chi_cols := get_constraint_columns(rec.chi_owner, rec.chi_constraint_name);
l_par_cols := get_constraint_columns(rec.par_owner, rec.par_constraint_name);
dbms_output.new_line;
dbms_output.put_line('-- ' || rec.chi_owner || '.' || rec.chi_table_name || '.' || l_chi_cols || ' -> '
|| rec.par_owner || '.' || rec.par_table_name || '.' || l_par_cols);
dbms_output.put_line(
'SELECT * ' || CHR(10) ||
'FROM ' || rec.par_owner || '.' || rec.par_table_name || ' p' || CHR(10) ||
'WHERE (' || l_par_cols || ') IN (' || CHR(10) ||
' SELECT ' || l_chi_cols || CHR(10) ||
' FROM ' || rec.chi_owner || '.' || rec.chi_table_name || ' t' || CHR(10) ||
' WHERE ' || l_table_condition || CHR(10) ||
');'
);
END LOOP;
dbms_output.put_line('-- Referenced by');
dbms_output.put_line(LPAD('-', 10, '-'));
FOR rec IN (
SELECT c.owner chi_owner, c.table_name chi_table_name, c.constraint_name chi_constraint_name
, p.owner par_owner, p.table_name par_table_name, p.constraint_name par_constraint_name
FROM dba_constraints c,
dba_constraints p
WHERE p.owner = l_owner
AND p.table_name = l_table_name
AND p.constraint_type = 'P'
AND p.owner = c.r_owner
AND p.constraint_name = c.r_constraint_name
) LOOP
l_chi_cols := get_constraint_columns(rec.chi_owner, rec.chi_constraint_name);
l_par_cols := get_constraint_columns(rec.par_owner, rec.par_constraint_name);
dbms_output.new_line;
dbms_output.put_line('-- ' || rec.chi_owner || '.' || rec.chi_table_name || '.' || l_chi_cols || ' -> '
|| rec.par_owner || '.' || rec.par_table_name || '.' || l_par_cols);
dbms_output.put_line(
'SELECT * ' || CHR(10) ||
'FROM ' || rec.chi_owner || '.' || rec.chi_table_name || ' c' || CHR(10) ||
'WHERE (' || l_chi_cols || ') IN (' || CHR(10) ||
' SELECT ' || l_par_cols || CHR(10) ||
' FROM ' || rec.par_owner || '.' || rec.par_table_name || ' t' || CHR(10) ||
' WHERE ' || l_table_condition || CHR(10) ||
');'
);
END LOOP;
END;
/
</code></pre>

2021-11-24 DB link creation with direct insert
<pre><code>insert into sys.link$ (OWNER#, NAME, CTIME, HOST, USERID, PASSWORD, FLAG, AUTHUSR, AUTHPWD, PASSWORDX, AUTHPWDX)
SELECT user_id                                                        -- OWNER#
      ,'DB_LINK_NAME'                                                 -- NAME
      ,SYSDATE                                                        -- CTIME
      ,'CONNECTION_STRING'                                            -- HOST
      ,'TARGET_DATABASE_USERNAME'                                     -- USERID
      ,NULL                                                           -- PASSWORD
      ,'2'                                                            -- FLAG
      ,NULL                                                           -- AUTHUSR
      ,NULL                                                           -- AUTHPWD
      ,'hashed password can be extracted from sys.link$.PASSWORDX'    -- PASSWORDX
      ,NULL                                                           -- AUTHPWDX
FROM dba_users
WHERE username = 'LINK_OWNER' -- DB link owner
</code></pre>

2021-11-08 Async using the dbms_scheduler package
<p>Simple async processing in Oracle can be implemented with the dbms_scheduler package</p>
<pre><code>-- dbms_scheduler.create_job makes explicit commit!!!
dbms_scheduler.create_job(
job_name => 'async_execution' || seq_async.nextval(),
job_type => 'PLSQL_BLOCK',
    job_action => 'BEGIN some_async_proc(); END;',
start_date => sysdate,
enabled => TRUE,
auto_drop => TRUE,
comments => 'Async procedure call'
);
</code></pre>
<p>An alternative is an implementation with dbms_aq queues, but in my experience Oracle’s queues occasionally start raising unexpected exceptions. On the other hand, with the dbms_scheduler solution it is impossible to guarantee ordering.</p>

2021-10-29 All privileges for all users
<pre><code>-- You can filter the results in the last lines of the query
WITH all_user_roles AS (
SELECT DISTINCT connect_by_root(grantee) username, granted_role, sys_connect_by_path(granted_role, '->') PATH, admin_option
FROM dba_role_privs p
START WITH grantee IN (SELECT username FROM dba_users)
CONNECT BY PRIOR granted_role = grantee
)
--SELECT * FROM all_user_roles;
, grantee AS (
SELECT username , granted_role, PATH FROM all_user_roles
UNION
SELECT username, NULL granted_role, 'DIRECT' AS PATH FROM dba_users
)
--SELECT * FROM grantee;
, priv_list AS (
SELECT 'ROLE' priv_type, username, granted_role priv, NULL AS owner, NULL AS table_name, NULL AS column_name, admin_option grantable, PATH
FROM all_user_roles
UNION
SELECT 'SYSTEM' priv_type, username, privilege priv, NULL AS owner, NULL AS table_name, NULL AS column_name, admin_option, PATH
FROM dba_sys_privs, grantee
WHERE grantee = grantee.username
UNION
SELECT 'TABLE' priv_type, username, PRIVILEGE, owner, table_name, NULL AS column_name, grantable, PATH
FROM dba_tab_privs, grantee
WHERE grantee = grantee.username
UNION
SELECT 'COLUMN' priv_type, username, PRIVILEGE, owner, table_name, column_name, grantable, PATH
FROM dba_col_privs, grantee
WHERE grantee = grantee.username)
SELECT *
FROM priv_list
WHERE priv = 'ALTER USER'
;
</code></pre>

2021-06-21 GIT housekeeping
<p>A script from Stack Overflow to remove local branches no longer present on the remote:</p>
<pre><code>git checkout master; git remote update origin --prune; git branch -vv | Select-String -Pattern ": gone]" | % { $_.toString().Trim().Split(" ")[0]} | % {git branch -d $_}
</code></pre>
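<p>The Select-String/Split part of the one-liner only picks branch names whose upstream is marked “: gone]”. A sketch of that filtering step in Python (the sample output lines are made up):</p>

```python
def gone_branches(branch_vv_output):
    """Return local branch names whose upstream is marked ': gone]' in `git branch -vv` output."""
    names = []
    for line in branch_vv_output.splitlines():
        if ": gone]" in line:
            names.append(line.strip().split()[0])
    return names
```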

2021-05-10 CONVERT for UTF-8
<p>The CONVERT function with a CLOB input parameter, in a database with the WE8MSWIN1252 character set, does not convert to UTF-8: the result looks like UTF-16. Conversion to the database character set works fine.<br>
Test script</p>
<pre><code>host chcp 65001
select CONVERT(CHR(246), 'AL32UTF8') from dual;
select CONVERT(TO_CLOB(CHR(246)), 'AL32UTF8') from dual;
select dump(to_char(CONVERT(TO_CLOB(CHR(246)), 'AL32UTF8'))) clob_convert, dump(CONVERT(CHR(246), 'AL32UTF8')) vc_convert
from dual;
host chcp 1252
select CONVERT(CHR(195) || CHR(182) , 'WE8MSWIN1252', 'AL32UTF8') from dual;
select CONVERT(TO_CLOB(CHR(195) || CHR(182)), 'WE8MSWIN1252', 'AL32UTF8') from dual;
</code></pre>
<h3 id="results">Results</h3>
<pre><code>SQL> host chcp 65001
Active code page: 65001
SQL> select CONVERT(CHR(246), 'AL32UTF8') from dual;
ö
SQL> select CONVERT(TO_CLOB(CHR(246)), 'AL32UTF8') from dual;
SQL> select dump(to_char(CONVERT(TO_CLOB(CHR(246)), 'AL32UTF8', 'WE8MSWIN1252'))) clob_convert, dump(CONVERT(CHR(246), 'AL32UTF8')) vc_convert
2 from dual;
Typ=1 Len=2: 0,246 <--- UTF_16????
Typ=1 Len=2: 195,182
SQL> host chcp 1252
Active code page: 1252
SQL> select CONVERT(CHR(195) || CHR(182) , 'WE8MSWIN1252', 'AL32UTF8') from dual;
ö
SQL> select CONVERT(TO_CLOB(CHR(195) || CHR(182)), 'WE8MSWIN1252', 'AL32UTF8') from dual;
ö
</code></pre>
<p>Quote from <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CONVERT.html#GUID-C8BA0657-61C8-4964-A4CB-9292390853F6">Oracle docs</a></p>
<blockquote>
<p>Note: Oracle discourages the use of the CONVERT function in the current Oracle Database release. The return value of CONVERT has a character data type, so it should be either in the database character set or in the national character set, depending on the data type. Any dest_char_set that is not one of these two character sets is unsupported. The char argument and the source_char_set have the same requirements. Therefore, the only practical use of the function is to correct data that has been stored in a wrong character set.<br>
Values that are in neither the database nor the national character set should be processed and stored as RAW or BLOB. Procedures in the PL/SQL packages UTL_RAW and UTL_I18N—for example, UTL_RAW.CONVERT—allow limited processing of such values. Procedures accepting a RAW argument in the packages UTL_FILE, UTL_TCP, UTL_HTTP, and UTL_SMTP can be used to output the processed data.</p>
</blockquote>
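<p>The byte dumps above can be reproduced outside the database. Code 246 is ö in WE8MSWIN1252 (cp1252 matches for this character); in UTF-8 it becomes the two bytes 195,182, and the 0,246 pair from the CLOB conversion is exactly its UTF-16 big-endian form:</p>

```python
ch = chr(246)  # ö: code 246 in cp1252, same as in WE8MSWIN1252 for this character
assert ch.encode("utf-8") == bytes([195, 182])     # the 195,182 dump (VARCHAR2 CONVERT)
assert ch.encode("utf-16-be") == bytes([0, 246])   # the 0,246 dump (CLOB CONVERT)
assert bytes([195, 182]).decode("cp1252") == "\u00c3\u00b6"  # why UTF-8 bytes display as "ö"
```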

2021-04-08 Jenkins: print the script executed by sh
<p>Put this inside the script.</p>
<ol>
<li>Print to trace</li>
</ol>
<pre><code>find /var/jenkins_home/jobs/My_Job/workspace@tmp/ \\
-name 'script.sh' \\
-exec cat '{}' \\;
</code></pre>
<ol start="2">
<li>Or copy whole folder</li>
</ol>
<pre><code>cp -r /var/jenkins_home/jobs/My_Job/workspace@tmp/ ~/tmp
</code></pre>
<p>Put the correct root folder in the script</p>

2021-04-07 Character set conversion
<p>Based on <a href="https://stackoverflow.com/questions/26390313/get-non-ascii-character-from-single-character-code">this</a></p>
<pre><code>db1>SELECT value FROM nls_database_parameters WHERE parameter='NLS_CHARACTERSET';
VALUE
----------------------------------------------------------------
WE8MSWIN1252
</code></pre>
<p>To get a value in the national character set of the database, use the UNISTR function:</p>
<pre><code>db1>SELECT dump(UNISTR('\00E4'), 1016), UNISTR('\00E4') FROM DUAL;
DUMP(UNISTR('\00E4'),1016) U
---------------------------------------- -
Typ=1 Len=2 CharacterSet=AL16UTF16: 0,e4 ä
</code></pre>
<p>To get a value in the database character set, use TO_CHAR:</p>
<pre><code>db1>SELECT DUMP(TO_CHAR(UNISTR('\00E4')), 1016), TO_CHAR(UNISTR('\00E4')) FROM DUAL;
DUMP(TO_CHAR(UNISTR('\00E4')),1016) T
----------------------------------------- -
Typ=1 Len=1 CharacterSet=WE8MSWIN1252: e4 ä
</code></pre>
<p>If characters are not shown correctly in SQL*Plus, use chcp and NLS_LANG as explained <a href="https://stackoverflow.com/questions/42019093/oracle-german-character-are-converted-to-junk-character">here</a></p>
<pre><code>db1>host chcp
Active code page: 1252
db1>host echo %NLS_LANG%
.WE8MSWIN1252
</code></pre>
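<p>The two dumps can be reproduced in Python: ä (U+00E4) is the two bytes 0,e4 in AL16UTF16 (UTF-16 big-endian) and the single byte e4 in WE8MSWIN1252 (cp1252):</p>

```python
ch = "\u00e4"                                         # same character as UNISTR('\00E4')
assert ch.encode("utf-16-be") == bytes([0x00, 0xE4])  # AL16UTF16 dump: Typ=1 Len=2: 0,e4
assert ch.encode("cp1252") == bytes([0xE4])           # WE8MSWIN1252 dump: Typ=1 Len=1: e4
```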

2021-04-02 Jenkins: fire on change of specific file in specific branch in Bitbucket
<p>Perhaps a different set of plugins can do this task, but I’ve implemented it with the Generic Webhook Trigger.</p>
<p>The final version is for Bitbucket’s “Pull request merged” webhook, but it may also be adapted to the “Repository push” webhook.</p>
<p>In Bitbucket, a webhook is configured that calls Jenkins.</p>
<p>In Jenkins, the build trigger is the Generic Webhook Trigger.<br>
<strong>Post content parameters</strong></p>
<table>
<thead>
<tr>
<th>Variable</th>
<th>Expression</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>merge_commit_hash</td>
<td><code>$.pullRequest.properties.mergeCommit.id</code></td>
<td>For pull request merge</td>
</tr>
<tr>
<td>pull_request_branch</td>
<td><code>$.pullRequest.toRef.id</code></td>
<td>For pull request merge</td>
</tr>
<tr>
<td>commit_hash</td>
<td><code>$.changes[0].toHash</code></td>
<td>For repository push</td>
</tr>
<tr>
<td>push_branch</td>
<td><code>$.changes[0].refId</code></td>
<td>For repository push</td>
</tr>
</tbody>
</table><p><strong>Filter in Jenkins’ Generic Webhook Trigger</strong> to process only changes in a specific branch:</p>
<table>
<thead>
<tr>
<th>Expression</th>
<th>Text</th>
</tr>
</thead>
<tbody>
<tr>
<td>refs/heads/dev</td>
<td><code>$push_branch$pull_request_branch</code></td>
</tr>
</tbody>
</table><p>Example of a simple Execute shell step:</p>
<pre><code>#!/bin/bash
echo "Looking for $FILE_NAME in commit $commit_hash $merge_commit_hash"
# list of changed files for a regular commit
#filelist=$(git diff-tree --no-commit-id --name-only -r $commit_hash)
# the commit is a merge commit - must use a different tactic
filelist=$(git log -m -1 --name-only --pretty="format:" $merge_commit_hash)
if [[ $filelist == *"$FILE_NAME"* ]]; then
    echo "Found - process the file here"
else
    echo "Skip"
fi
</code></pre>
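<p>The two JSONPath expressions used for the pull-request case boil down to nested key lookups. A sketch in Python against a made-up fragment of the Bitbucket payload (the real event carries many more fields):</p>

```python
payload = {  # trimmed, hypothetical shape of the "pull request merged" event
    "pullRequest": {
        "toRef": {"id": "refs/heads/dev"},
        "properties": {"mergeCommit": {"id": "a1b2c3d"}},
    }
}

def jget(doc, dotted_path):
    """Resolve a dotted path like '$.pullRequest.toRef.id' against nested dicts."""
    node = doc
    for key in dotted_path.lstrip("$.").split("."):
        node = node[key]
    return node
```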

2021-03-24 Find all utl_ packages in source code
<p>Using the grep utility</p>
<pre><code>grep -rohP "(^|\W)\Kutl_\w*\." . | sort -u
</code></pre>
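<p>The \K in this Perl-compatible pattern discards everything matched before it; in Python the same effect needs a capture group. A sketch over a made-up source snippet:</p>

```python
import re

src = """
BEGIN
  utl_file.fopen('DIR', 'f.txt', 'w');
  sys.utl_http.request('http://example.com');
  my_utl_like_name := 1;  -- no match: 'utl_' is preceded by a word character
END;
"""
# capture group plays the role of grep's \K: keep only the utl_...  part
hits = sorted(set(re.findall(r"(?:^|\W)(utl_\w*\.)", src)))
```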

2021-03-08 Database CI pipeline idea
<ol>
<li>Prod release</li>
<li>Remote online clone of the Prod database using <a href="https://oracle-base.com/articles/12c/multitenant-hot-clone-remote-pdb-or-non-cdb-12cr2">this technique.</a> See the link for the list of restrictions; the most significant are: the source database must be in archivelog mode and have local undo (otherwise it must be in read-only mode), and the endianness of source and target must match. A database link must exist from target to source.</li>
<li>Keep the created clone as a golden source. Use <a href="https://blog.dbi-services.com/pdb-snapshot-copy-for-continuous-integration-testing/">PDB snapshot copy</a> to create databases for testing<br>
– the parameter <code>clonedb=true</code> must be set<br>
– the filesystem must support sparse files (ext4, NTFS, … are ok)<br>
– as sparse files are used, the whole process is very fast and doesn’t consume much space</li>
</ol>

2021-02-19 UTL_FILE_DIR and ORA-29280
<p>The utl_file_dir parameter was deprecated at least as of 12c and completely removed as of 19c.</p>
<p>For UTL_FILE procedures like fopen there are two different ways to specify the directory:</p>
<ol>
<li>Using an Oracle DIRECTORY object</li>
<li>Using a string with the full path. This approach works if and only if the specified directory is present in the UTL_FILE_DIR parameter; otherwise it raises ORA-29280 (see the example below). <strong>This way does not work in 19c, as UTL_FILE_DIR no longer exists</strong><br>
The first approach works regardless of UTL_FILE_DIR, and it is the only way in Oracle 19c</li>
</ol>
<pre><code>SELECT NAME, VALUE FROM v$parameter WHERE NAME = 'utl_file_dir';
NAME VALUE
utl_file_dir c:\temp\utl_file_dir
======================================================
Using folder name with full path.
UTL_FILE_DIR does NOT contain folder - NOT WORKING!!!
======================================================
DECLARE
v_fp sys.utl_file.file_type;
BEGIN
v_fp := sys.utl_file.fopen('c:\temp\non_utl_file_dir', 'test_file.csv', 'w', 32000);
utl_file.fclose(v_fp);
END;
/
DECLARE
v_fp sys.utl_file.file_type;
BEGIN
v_fp := sys.utl_file.fopen('c:\temp\non_utl_file_dir', 'test_file.csv', 'w', 32000);
utl_file.fclose(v_fp);
END;
ORA-29280: invalid directory path
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at line 4
======================================================
USING DIRECTORY OBJECT - WORKING
======================================================
CREATE OR REPLACE directory REPORT_DIRECTORY AS 'c:\temp\non_utl_file_dir';
Directory created
DECLARE
v_fp sys.utl_file.file_type;
BEGIN
v_fp := sys.utl_file.fopen('REPORT_DIRECTORY', 'test_file.csv', 'w',32000);
utl_file.fclose(v_fp);
END;
/
PL/SQL procedure successfully completed
======================================================
Using folder name with full path.
UTL_FILE_DIR contains folder - WORKING
======================================================
DECLARE
v_fp sys.utl_file.file_type;
BEGIN
v_fp := sys.utl_file.fopen('c:\temp\utl_file_dir', 'test_file.csv', 'w',32000);
utl_file.fclose(v_fp);
END;
/
PL/SQL procedure successfully completed
</code></pre>

2021-02-17 Set the ssh private key folder for GIT
<p>Set the <strong>HOME</strong> variable to the folder containing the .ssh folder with the keys, and Windows GIT will pick up the private key from that folder:</p>
<pre><code>set HOME=x:\
</code></pre>

2021-01-18 Test connectivity on Linux
<p>Using curl</p>
<pre><code>curl -v telnet://host:port
</code></pre>
<p>Using netcat</p>
<pre><code>nc -z -v host port
</code></pre>
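<p>The same check can be scripted with the Python standard library; a small sketch, not from the original post:</p>

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```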

2021-01-12 cURL: send a file to Oracle cloud
<pre><code>curl --write-out '%{time_total}' -X POST --data-binary "@5M.csv" -H "Content-Type:text/csv" --user <user>:<password> "https://....adb.us-ashburn-1.oraclecloudapps.com/ords/<user>/huge_csv/batchload?batchRows=5000&errorsMax=20"
</code></pre>
<p>Taken from <a href="https://www.thatjeffsmith.com/archive/2019/10/batch-loading-csv-to-a-table-in-oracle-autonomous-database-using-autorest-api/">here</a></p>

2021-01-07 USE_SHARED_SOCKET
<p>For old versions, as pointed out in the note Troubleshooting Connectivity Issues From a Remote Client to a Database (Doc ID 271852.1)</p>
<blockquote>
<p>For 9.2 and below, if the Operating System is Microsoft Windows NT,<br>
and you are using the dedicated server connection, a spawned Oracle<br>
thread communicates with the client through a randomly allocated port<br>
(called a REDIRECT). To force the oracle process to communicate only<br>
through the port in which Listener is listening, set the key,<br>
USE_SHARED_SOCKET = TRUE, in the Windows Registry<br>
NOTE:<br>
This is no longer the case with 10g and up as a REDIRECT does not take place for DEDICATED connections.</p>
</blockquote>
<p>Or, according to note 124140.1, starting with version 10.2</p>
<blockquote>
<p><strong>This technique only involves the version 8.1 to 10.1 Oracle Database versions. From 10.2 onward the default changed to allow port sharing.</strong></p>
</blockquote>
<p>The reason for this:</p>
<blockquote>
<p>Microsoft WINSOCK V1.1 API did not allow one process to pass a TCP socket to another process and as a result did not allow port sharing like on UNIX systems</p>
</blockquote>
<p>This knowledge is completely useless by now, because:</p>
<ol>
<li>Since version 10.2, USE_SHARED_SOCKET=TRUE in the registry or an environment variable is no longer needed</li>
<li>I haven’t seen Oracle on Windows in ages, except on my own laptop</li>
</ol>

2020-12-24 Use BLOB instead of CLOB for JSON
<p>Advice I’ve come across a couple of times:</p>
<blockquote>
<p>Oracle performance tip: JSON is always UTF-8. Use the ‘al32utf8’ character set if possible.<br>
Use BLOB instead of CLOB: CLOB stores data as UCS2/UTF16.<br>
Use the LOB data API to insert/fetch data: getBytes/setBytes instead of getBlob/setBlob!</p>
</blockquote>