PHOENIX-7593: Enable CompactionScanner for flushes #2134
sanjeet006py wants to merge 5 commits into apache:master from
Conversation
// This will happen only during flushes as then we don't pass PTable object
// to determine emptyCF and emptyCQ
if (emptyCQ == EMPTY_BYTE_ARRAY) {
    determineEmptyCfCq(result);
For the stores that are not the empty CF store, aren't we doing this check on every row?
Yeah, will fix that. Thanks
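One possible shape of that fix (a sketch only, not necessarily how the PR will end up doing it; the flag and helper names below are hypothetical) is to remember that the determination has already been attempted, so rows of stores that never contain the empty column are not re-scanned every time:
// Illustrative sketch: attempt the emptyCF/emptyCQ determination at most once
// per scanner instead of on every row. Field/method names are hypothetical.
private boolean emptyCfCqDeterminationAttempted = false;

private void maybeDetermineEmptyCfCq(List<Cell> result) {
    // Only the flush path reaches here without a PTable, so emptyCQ starts unset.
    if (!emptyCfCqDeterminationAttempted && emptyCQ == EMPTY_BYTE_ARRAY) {
        determineEmptyCfCq(result);
        emptyCfCqDeterminationAttempted = true;
    }
}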
// Doing MAX_COLUMN_INDEX + 1 to account for empty cells
assertEquals(TestUtil.getRawCellCount(conn, TableName.valueOf(tableName), row),
        rowUpdateCounter * (MAX_COLUMN_INDEX + 1));
// At every flush, extra cell versions should be removed.
We need to rename this test method to testMinorCompactionAndFlushShouldNotRetainCellsWhenMaxLookbackIsDisabled
conn.commit();
injectEdge.incrementValue(MAX_LOOKBACK_AGE * 1000);
TestUtil.dumpTable(conn, dataTableName);
TestUtil.flush(utility, dataTableName);
Let's use TestUtil.getRawCellCount and verify that extra row versions are removed.
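For example, something along these lines (a sketch; it reuses the test's existing helpers and variables, and the expected drop in count depends on how many versions the test writes):
// Sketch: compare raw cell counts for row 'a' before and after the flush.
long before = TestUtil.getRawCellCount(conn, TableName.valueOf(dataTableName), Bytes.toBytes("a"));
TestUtil.flush(utility, dataTableName);
long after = TestUtil.getRawCellCount(conn, TableName.valueOf(dataTableName), Bytes.toBytes("a"));
// Extra row versions outside the max lookback window should have been purged.
assertTrue(after < before);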
TestUtil.dumpTable(conn, dataTableName);
ResultSet rs = stmt.executeQuery("select * from " + dataTableName + " where id = 'a'");
while (rs.next()) {
    assertNotNull(rs.getString(3));
Let's verify the column values are equal to the expected values here.
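For example (a sketch; "expected_val" is a placeholder for whatever the test last upserted into that column for row 'a'):
while (rs.next()) {
    // Assert the exact value rather than just non-null.
    assertEquals("expected_val", rs.getString(3));
}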
this.emptyCF = table != null ? SchemaUtil.getEmptyColumnFamily(table) : EMPTY_BYTE_ARRAY;
this.emptyCQ = table != null ? SchemaUtil.getEmptyColumnQualifier(table) : EMPTY_BYTE_ARRAY;
Why not keep emptyCF and emptyCQ as null if PTable is null, so that we can also incorporate logic like the following?
Instead of:
if (ScanUtil.isEmptyColumn(cell, emptyCF, emptyCQ)) {
    index = addEmptyColumn(result, currentColumnCell, index, emptyColumn);
} else {
    index = skipColumn(result, currentColumnCell, retainedCells, index);
}
this:
if (emptyCF != null && emptyCQ != null && ScanUtil.isEmptyColumn(cell, emptyCF,
        emptyCQ)) {
    index = addEmptyColumn(result, currentColumnCell, index, emptyColumn);
} else {
    index = skipColumn(result, currentColumnCell, retainedCells, index);
}
and similarly, the if (emptyCQ == EMPTY_BYTE_ARRAY) check too becomes a simple null check.
I don't think EMPTY_BYTE_ARRAY is allowed as a CF:CQ, but while debugging, a null check will be more readable than passing incorrect values of emptyCF and emptyCQ to ScanUtil.isEmptyColumn.
By keeping emptyCF and emptyCQ as null, are we trying to optimize the if check? I actually kept them as empty byte arrays to avoid null handling, and nothing will match an empty byte array anyway.
I don't think EMPTY_BYTE_ARRAY is allowed as a CF:CQ, but while debugging, a null check will be more readable than passing incorrect values of emptyCF and emptyCQ to ScanUtil.isEmptyColumn.
Got it, will change to storing null values. I agree this improves readability. Thanks
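Roughly, the change will look like this (a sketch of the suggested adjustment):
this.emptyCF = table != null ? SchemaUtil.getEmptyColumnFamily(table) : null;
this.emptyCQ = table != null ? SchemaUtil.getEmptyColumnQualifier(table) : null;
and the flush-only determination becomes a plain null check:
if (emptyCQ == null) {
    determineEmptyCfCq(result);
}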
Not related to this PR, but as a general improvement, this method should not be named this way; we should remove the above utility because HBase CellUtil already provides exactly the same (worth doing as a separate Jira/PR though).
Created JIRA: https://issues.apache.org/jira/browse/PHOENIX-7597
public InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> c, Store store,
        InternalScanner scanner, FlushLifeCycleTracker tracker)
        throws IOException {
    if (!isPhoenixTableTTLEnabled(c.getEnvironment().getConfiguration())) {
We are using the same config to control both flushing and compaction. Will there be a scenario where we want to disable Phoenix compaction on flushing but still continue to use Phoenix compaction for major/minor compaction?
Will there be a scenario where we want to disable Phoenix compaction on flushing but still continue to use Phoenix compaction for major/minor compaction?
I think in general it will be good to have this flexibility. Shall I introduce a new config to enable/disable the preFlush hook separately?
Yeah, I think that will be better: Phoenix compaction has already been running in production and this is a new feature, so having this flexibility will be helpful. We can enable it by default.
This flexibility will be helpful. I think we can go ahead with merging the changes once the config is added, and the perf analysis can be done later. The config will anyway be helpful to turn off the feature. WDYT @tkhurana?
If we are fine with waiting for a day or two more, then I can post the perf results. For a single column family it's done; I am currently doing the perf analysis for multi-CF. Thanks
@virajjasani perf analysis for multi-CF will take some time, as I learned that the HBase flushTime metric in the multi-CF case can sometimes end up tracking the combined time for flushing multiple CFs and at other times only one CF. So, I need to directly track the time taken by StoreFlusher.
One idea is that we wait for the perf analysis before merging this PR while the 5.3 release goes on; alternatively, if we want this PR in 5.3, we can disable the preFlush hook via config for now and only enable it after the perf analysis. I am a bit inclined towards the first approach. WDYT @virajjasani @tkhurana?
@sanjeet006py Instead of relying on metrics, you could also use the log messages emitted when a flush happens, for example: "2025-05-07 16:47:34,580 INFO [MemStoreFlusher.1] regionserver.HRegion - Finished flush of dataSize ~255.34 MB/267746710, heapSize ~256.01 MB/268445696, currentSize=10.04 MB/10529365 for 397f5412f43294d01081c54d7253d378 in 1986ms, sequenceid=207416387, compaction requested=false". Does that have the same problem too?
Also, we don't need absolute numbers, just a comparison to make sure nothing has regressed.
Does that have the same problem too?
Yes, as this log line only tells us the total time that all the stores being flushed took to flush their data. I added a log line in StoreFlusher and will test with that now.
Added the config to disable CompactionScanner for flushes.
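For reference, the guard in preFlush roughly takes this shape (a sketch only; the config key name here is hypothetical, not necessarily the one added in this PR):
Configuration conf = c.getEnvironment().getConfiguration();
// Hypothetical flush-specific switch, enabled by default, checked in addition to
// the existing overall Phoenix table TTL check.
boolean flushScannerEnabled =
        conf.getBoolean("phoenix.table.ttl.compaction.scanner.flush.enabled", true);
if (!isPhoenixTableTTLEnabled(conf) || !flushScannerEnabled) {
    // Leave HBase's default flush scanner untouched.
    return scanner;
}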
@sanjeet006py Can you also do a perf study to rule out any performance degradation that could get introduced in the flushing path? We have some metrics at the regionserver, like the flush time metrics.
@tkhurana @virajjasani the perf analysis is done: https://docs.google.com/document/d/1oQzEMP4LXOFxLHlKt1SZ5uvRLd3Vk90x39gn1hVBn0Y/edit?tab=t.0#heading=h.32xuccojgowv. Overall, I see that enabling CompactionScanner for flushes has some overhead (as expected), but not big enough to cause performance degradation. Thanks
No description provided.