Channel: Media Foundation Development for Windows Desktop forum

DXVA in virtual graphics driver on XP


I'm developing a virtual graphics driver with DXVA support for redirecting video from a virtual machine to a remote PC that has an Intel HD4000 graphics card, but some problems are blocking me. I use DXVA Checker to test my virtual graphics driver. 

I followed MSDN and implemented almost all of the DXVA functions. 
These functions are called while my video driver is being loaded: 
1) DrvGetDirectDrawInfo (twice)
2) DrvEnableDirectDraw (in DDCORECAPS I just set the caps whose names contain OVERLAY, HARDWARE, CODEC, BOB, FOURCC, YUV, COLORKEY)
3) DdGetDriverInfo (many times; my driver only provides GUID_MotionCompCallbacks, GUID_NTPrivateDriverCaps, GUID_DDMoreSurfaceCaps and GUID_GetHeapAlignment)
4) (repeat steps 1 to 3)
5) DrvDisableDirectDraw (I'm not sure why DirectDraw is disabled after steps 1 to 4)
I'm not sure whether all of the driver capabilities I set in DDCORECAPS actually take effect. 

DXVA Checker calls the DirectDraw functions in the driver in the following order: 
1) DdGetDriverInfo
2) DdCanCreateSurface
3) DdCreateSurface (creates the primary surface)
4) DdMoCompGetGuids
5) DdMoCompCreate (problem: in my driver this is only called with DXVA_DeinterlaceContainerDevice) 
6) DdCreateSurface (creates the overlay surface)
7) DdMapMemory
8) DdLock (my driver only returns DD_OK and DDHAL_DRIVER_HANDLED; I'm not sure whether that is enough) 
9) DdMoCompGetFormats (returns all color formats the video driver supports; in my driver: YUY2, NV12, YV12) 
10) DdMoCompGetBuffInfo (problem: never called in my video driver) 
11) DdMoCompRender (problem: only called for COPP in my video driver) 
12) DdMoCompQueryStatus (never called) 
13) DdMoCompBeginFrame (never called) 
14) DdMoCompEndFrame (never called) 
15) DdMoCompDestroy (called) 

The MSDN documentation for DXVA is nearly driving me crazy. 
Could anyone help me make these functions work? (A simplified sketch of how I hook up the motion-comp callbacks follows below.) 
1) DdMoCompGetBuffInfo (which capability makes this function get called?) 
2) DdMoCompQueryStatus 
3) DdMoCompBeginFrame 
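
For reference, this is roughly how my driver reports the motion-comp callbacks in DdGetDriverInfo (a simplified sketch from memory, not the exact code; the flag set and member names are what I believe I'm using):

    // Inside DdGetDriverInfo: answer the GUID_MotionCompCallbacks query by
    // handing back a DD_MOTIONCOMPCALLBACKS table with the callbacks and the
    // corresponding DDHAL_MOCOMP32_* flags.
    if (memcmp(&lpGetDriverInfo->guidInfo, &GUID_MotionCompCallbacks, sizeof(GUID)) == 0)
    {
        DD_MOTIONCOMPCALLBACKS MoCompCallbacks;
        memset(&MoCompCallbacks, 0, sizeof(MoCompCallbacks));

        MoCompCallbacks.dwSize  = sizeof(MoCompCallbacks);
        MoCompCallbacks.dwFlags = DDHAL_MOCOMP32_GETGUIDS   | DDHAL_MOCOMP32_GETFORMATS |
                                  DDHAL_MOCOMP32_CREATE     | DDHAL_MOCOMP32_GETCOMPBUFFINFO |
                                  DDHAL_MOCOMP32_BEGINFRAME | DDHAL_MOCOMP32_ENDFRAME |
                                  DDHAL_MOCOMP32_RENDER     | DDHAL_MOCOMP32_QUERYSTATUS |
                                  DDHAL_MOCOMP32_DESTROY;
        MoCompCallbacks.GetMoCompGuids    = DdMoCompGetGuids;
        MoCompCallbacks.GetMoCompFormats  = DdMoCompGetFormats;
        MoCompCallbacks.CreateMoComp      = DdMoCompCreate;
        MoCompCallbacks.GetMoCompBuffInfo = DdMoCompGetBuffInfo;
        MoCompCallbacks.BeginMoCompFrame  = DdMoCompBeginFrame;
        MoCompCallbacks.EndMoCompFrame    = DdMoCompEndFrame;
        MoCompCallbacks.RenderMoComp      = DdMoCompRender;
        MoCompCallbacks.QueryMoCompStatus = DdMoCompQueryStatus;
        MoCompCallbacks.DestroyMoComp     = DdMoCompDestroy;

        // Copy no more than the caller expects and report the real size back.
        DWORD dwCopySize = (lpGetDriverInfo->dwExpectedSize < sizeof(MoCompCallbacks))
                               ? lpGetDriverInfo->dwExpectedSize
                               : sizeof(MoCompCallbacks);
        memcpy(lpGetDriverInfo->lpvData, &MoCompCallbacks, dwCopySize);
        lpGetDriverInfo->dwActualSize = sizeof(MoCompCallbacks);
        lpGetDriverInfo->ddRVal = DD_OK;
        return DDHAL_DRIVER_HANDLED;
    }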

Thanks


E_ACCESSDENIED Error with Video File open with IMFSourceReader or IMFSourceResolver


I am trying to open video files using MFCreateSourceReaderFromURL() or IMFSourceResolver->CreateObjectFromURL(), but both of them are returning E_ACCESSDENIED (General access denied error).

I am on Windows 8.1 desktop and the video is in the Pictures Library folder. I have the Pictures Library capability checked on the Package.appxmanifest page, but I am still getting the access denied error. I am running the code in WinRT, and I can open the file using StorageFile->OpenAsync() to get an IRandomAccessStream. But the Source Reader is not working.

Any idea what is causing this and how I can get rid of the problem?
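
One workaround I am considering (a sketch only, not verified): since I can already get an IRandomAccessStream from StorageFile->OpenAsync(), wrap that stream in an IMFByteStream and create the source reader from the byte stream instead of from a path:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>

    // Create a source reader from a WinRT IRandomAccessStream instead of a URL.
    HRESULT CreateReaderFromStream(
        Windows::Storage::Streams::IRandomAccessStream^ stream,
        IMFSourceReader **ppReader)
    {
        IMFByteStream *pByteStream = NULL;

        // Wrap the WinRT stream in an IMFByteStream (Windows 8 and later).
        HRESULT hr = MFCreateMFByteStreamOnStreamEx(
            reinterpret_cast<IUnknown*>(stream), &pByteStream);

        if (SUCCEEDED(hr))
        {
            // Create the reader on the byte stream; no file path is needed.
            hr = MFCreateSourceReaderFromByteStream(pByteStream, NULL, ppReader);
        }

        if (pByteStream)
        {
            pByteStream->Release();
        }
        return hr;
    }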


Updating an ASF header via IMFASFContentInfo::SetProfile only works with one stream ?


Hello,

I am altering the ASF header on a media sink before it gets written to the file.

This works very well as long as I only have one stream on the IMFASFProfile; with multiple streams it shows some odd behavior. When I have more than one stream on the profile and I change the stream config of one of the streams, the first stream is deleted in the final file.

I followed the documentation exactly and paid very close attention to the clones of objects like the IMFASFStreamConfig or IMFASFProfile. The problem lies in IMFASFContentInfo::SetProfile, I guess, because when I do not set the updated profile the streams and their headers appear normal in the file, but of course with no changes. The MSDN documentation states that SetProfile replaces the old profile, but from my experience it only does so when you have one stream in the profile.

After thousands of tests over the past 3 days I still can't figure out what SetProfile is doing internally, but I must be very close to it. I know my updated profile is correct, because I printed the media type attributes of every stream after I made my changes.

One thing that really confuses me is that if you don't clone the profile or the stream config, directly use the pointer to the object inside of the content info, and call IMFASFProfile::RemoveStream on all streams, then the streams and their headers are still present in the final file. As the documentation says about IMFASFProfile::CreateStream:

-

The ASF stream configuration object created by this method is not included in the profile. To include the stream, you must first configure the stream configuration and then call IMFASFProfile::SetStream.

-

That means even if you remove a stream, the ContentInfo still has its stream config objects set. Well... there's no function exposed to delete/remove a stream config on a profile; you only have IMFASFProfile::SetStream. To quote the documentation on this function:

-

If the stream number in the ASF stream configuration object is already included in the profile, the information in the new object replaces the old one. If the profile does not contain a stream for the stream number, the ASF stream configuration object is added as a new stream.

-

Well, this works as expected; I know that because I printed the whole profile after I set an updated config with this function. As I stated, the problem is IMFASFContentInfo::SetProfile. Somehow stream and stream config are not the same thing inside the content info, and there must be some backup pointers to the stream configs.

Current behavior of IMFASFContentInfo::SetProfile :

With only one stream in a profile the function replaces the existing profile with the updated one. With multiple streams on a profile the function not only deletes the first stream, it also seems to use internal backup pointers so that no changes are made even if you alter the second stream and leave the first one untouched (the second stream then still has the old values in the header).
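
For completeness, this is roughly the update pattern I am using, with pContentInfo being my IMFASFContentInfo (a simplified sketch; error handling and releases stripped, and the stream index and attribute change are just placeholders):

    IMFASFProfile *pProfile = NULL;
    IMFASFProfile *pProfileClone = NULL;
    IMFASFStreamConfig *pStream = NULL;
    IMFASFStreamConfig *pStreamClone = NULL;
    IMFMediaType *pType = NULL;
    WORD wStreamNumber = 0;

    // Work on clones, as the documentation recommends.
    pContentInfo->GetProfile(&pProfile);
    pProfile->Clone(&pProfileClone);

    // Take the second stream (index 1) and clone its configuration.
    pProfileClone->GetStream(1, &wStreamNumber, &pStream);
    pStream->Clone(&pStreamClone);

    // Alter the media type of the cloned stream config.
    pStreamClone->GetMediaType(&pType);
    // ... change attributes on pType here ...
    pStreamClone->SetMediaType(pType);

    // Put the updated config back; this replaces the entry for wStreamNumber.
    pProfileClone->SetStream(pStreamClone);

    // This is where things go wrong when the profile has more than one stream.
    pContentInfo->SetProfile(pProfileClone);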

I would really like to hear from the MS team what SetProfile is doing internally and how to fix this.

regards

co0Kie






Updating the header on the ASF Media Sink via IMFASFContentInfo::SetProfile only works with one stream ?


Hello,

I am altering the header on the ASF media sink before it gets written to the file.

This works very well with one stream on the IMFASFProfile, but with multiple streams it shows some odd behavior. When I have more than one stream on the profile, a call to IMFASFContentInfo::SetProfile will fail and the first stream is deleted from the final file.

I followed the documentation exactly and paid very close attention to the clones of objects like the IMFASFStreamConfig or IMFASFProfile. I know my updated profile is correct, because I printed the whole thing after I made my changes to the streams.

The problem must be inside IMFASFContentInfo::SetProfile, because when I do not set the updated profile the streams and their headers appear normal in the file, but of course with no changes. The MSDN documentation states that SetProfile replaces the old profile with the new one, but from my experience it only does so when you have one stream in the profile.

Current behavior of IMFASFContentInfo::SetProfile :

With only one stream in a profile the function replaces the existing profile with the new one. With multiple streams on a profile the function always fails (even if the profile is the original one with no changes) and deletes the first stream from the final file.

I would really like to hear from the MS team what attributes SetProfile is checking on the profile and how to get it to work with multiple streams.

regards

co0Kie













Help: Custom WPF Video Player using WPF D3DImage, Custom EVR


Greetings -

Due to some special requirements, I'm implementing a custom media player that needs to be consumable by WPF. I'm using Media Foundation with a custom EVR, and WPF's D3DImage control. (I originally had MF output directly to an HwndHost child but the resulting window will not layer properly beneath other WPF elements, which is likewise essential). In effect, I'm making a custom MediaElement from scratch (any naysayers can stop reading ;)).

I started with the EVRPresenter and MFPlayer samples from the SDK, largely unchanged. The one important difference is that in the EVR presenter, rather than present the IDirect3DSurface9 to the hwnd provided when the MF video renderer is created, I instead post an application message to the HWND, containing the pointer to the IDirect3DSurface9 (yes, I call AddRef()). The HWND (which is a dummy window per the "WPF and Direct3D9 Interoperation article") then handles the message by updating the D3DImage with the provided surface, and then releases the surface. This occurs for each video frame.
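
To make the hand-off concrete, this is essentially all the presenter does per frame today (a simplified sketch; WM_APP_NEW_SURFACE and m_hwndTarget are my own names):

    // Presenter side: hand the frame's surface to the WPF app's dummy HWND.
    // The receiver updates the D3DImage back buffer and then releases the surface.
    const UINT WM_APP_NEW_SURFACE = WM_APP + 1;

    void NotifyWpf(HWND hwndTarget, IDirect3DSurface9 *pSurface)
    {
        pSurface->AddRef();  // keep the surface alive until the WPF side releases it
        PostMessage(hwndTarget, WM_APP_NEW_SURFACE, 0,
                    reinterpret_cast<LPARAM>(pSurface));
    }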

Short story, it works, BUT, performance is poor. My assumption is that the posting / handling of the message, wherein I transfer the IDirect3DSurface9 pointer from the EVR DLL to the WPF application, is the bottleneck, and that posting messages to the dummy window is not the most efficient way to transfer the IDirect3DSurface9 pointer to the WPF app.

Here's the question - how best can I transfer the IDirect3DSurface9 pointer from the EVR to the WPF app, on time, and signal the WPF app it's time to update the D3DImage? The EVR, being entirely unmanaged code, clearly cannot talk directly to my WPF windows. Posting an app message to the dummy hwnd seems like the best option. But since performance is unacceptable, I either need another, or I need assurance that message posting is actually not the bottleneck.

Any suggestions? Thanks so much.

Peter



How can I use MFCreateSourceReaderFromURL to access a video file in Metro?


Hello,

I use MFCreateSourceReaderFromURL to load a video file and show thumbnails in my Metro project. If I add a video file to the "Assets" directory in my project, it works. But if I remove the video from the project or select another file with FileOpenPicker, it shows "E_ACCESSDENIED General access denied error." The code is below.

 // Create the source reader from the URL.

    if (SUCCEEDED(hr))
    {
        hr = MFCreateSourceReaderFromURL(wszFileName, pAttributes, &m_pReader);

        //wszFileName = C:\Users\xxx\Videos\test.mp4

    }

So is it that the Win32 API can't access files outside the project package in Metro, or am I making a mistake? I would appreciate it if someone could tell me the solution.



DX11 Video Renderer sample code is incomplete.

Your ref : http://code.msdn.microsoft.com/windowsdesktop/DirectX-11-Video-Renderer-0e749100

Dear Sirs,

    So ... that project creates a DLL, which isn't a standard EVR so I have no clue how to fire it up.

Feedback states that the only way to test it is to use a closed-source version of TopoEdit in the Windows 8 SDK.

That isn't very helpful as that doesn't demonstrate how to use the thing.

Please provide sample code - the simpler the better that demonstrates how to use this DirectX11 Video Renderer or a derivative to throw a video texture on some simple geometry on the screen.

As a follow up, please demonstrate multiple video textures playing simultaneously to demonstrate the API supports this feature. If it doesn't, please add this support :)

Sorry to give you a hard time but I need a solid video API and if Windows doesn't provide one, it's time to look to other operating systems for a robust solution.

Regards,
Steve.

IMFSinkWriter bogs down after about 10800 frames


Hiya folks,

I'm creating audio and video and sending it to an IMFSinkWriter to output as an H.264/AAC mp4 movie.  I've noticed that after about 10800 frames of 640 x 480 resolution video with audio, the sink writer basically stops responding... it becomes so slow that the program is unusable, and eventually it crashes.  

I thought I fixed the issue by calling IMFSinkWriter->Flush() about every second... this kept the performance peppy, but then I noticed that the resulting movie was skipping.  I wasn't too surprised, as it is documented in the Flush() API that when calling Flush, all pending samples are dropped.  OK, fine.

So then I tried calling IMFSinkWriter->NotifyEndOfSegment() about every second.  This also kept the performance peppy and didn't seem to drop frames.  However, I noticed that it was causing my audio track and video track to get out of sync... specifically, after a while, the video starts advancing faster than the audio (picture a slideshow here, where the audio track is correct but the pictures are jumping ahead of their audio).  It's like the duration of the video frames is messed up.

I thought perhaps that somehow queued samples were still being dropped, so I tried using IMFSinkWriter->GetStatistics() to watch for pending samples and wait for them before proceeding, but this hasn't fixed my problem.  I'm left with two choices: call NotifyEndOfSegment() and screw up my synchronization, or don't call it and bog down the sink writer.  Either choice is unacceptable... I must be doing something wrong.  Can someone please point me in the right direction?  I must be missing something basic and significant in my use of the IMFSinkWriter.
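
For reference, this is roughly the back-pressure loop I tried around GetStatistics (a simplified sketch; the function name and thresholds are mine):

    // Before writing more samples, wait until the sink writer has processed
    // most of what it has already received, instead of letting the queue grow.
    bool WaitForWriterToDrain(IMFSinkWriter *pWriter, DWORD dwVideoStreamIndex)
    {
        for (int i = 0; i < 100; ++i)            // give up after ~1 second
        {
            MF_SINK_WRITER_STATISTICS stats;
            ZeroMemory(&stats, sizeof(stats));
            stats.cb = sizeof(stats);

            HRESULT hr = pWriter->GetStatistics(dwVideoStreamIndex, &stats);
            if (FAILED(hr))
            {
                return false;
            }

            // Samples handed to WriteSample but not yet processed by the sink.
            QWORD pending = stats.qwNumSamplesReceived - stats.qwNumSamplesProcessed;
            if (pending < 30)                    // roughly one second of 30 fps video
            {
                return true;
            }
            Sleep(10);
        }
        return false;
    }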


How can I create an MP4 container from an h.264 byte stream (Annex B)?


Basically, I have a neat H.264 byte stream in the form of I and P samples. I can play these samples using MediaStreamSource and MediaElement and they play well. I also need to save them as an MP4 file so that they can be played later using MediaElement or VLC. This is how I am trying to do it using Media Foundation:

I create an IMFMediaSink from MFCreateMPEG4MediaSink; this is my code:

IMFMediaType *pMediaType = NULL;
	IMFByteStream *pByteStream = NULL;
	HRESULT hr = S_OK;
	if (SUCCEEDED(hr))
	{
		hr = MFCreateMediaType(&pMediaType);
	}

	pSeqHdr = reinterpret_cast<UINT8 *>(mSamplesQueue.SequenceHeader());
	if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetBlob(MF_MT_MPEG_SEQUENCE_HEADER, pSeqHdr, 35);
	}
	UINT32 pcbBlobSize = {0};
	hr = pMediaType->GetBlobSize(MF_MT_MPEG_SEQUENCE_HEADER, &pcbBlobSize);

	/*if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetUINT32(MF_MPEG4SINK_SPSPPS_PASSTHROUGH, TRUE);
	}*/
	if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
	}
	if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetGUID(MF_MT_SUBTYPE, VIDEO_INPUT_FORMAT);
	}
	if (SUCCEEDED(hr))
	{
		hr = MFSetAttributeRatio(pMediaType, MF_MT_FRAME_RATE, VIDEO_FPS, 1);
	}
	if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetUINT32(MF_MT_AVG_BITRATE, VIDEO_BIT_RATE);
	}
	if (SUCCEEDED(hr))
	{
		hr = pMediaType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
	}
	if (SUCCEEDED(hr))
	{
		hr = MFSetAttributeSize(pMediaType, MF_MT_FRAME_SIZE, VIDEO_WIDTH, VIDEO_HEIGHT);
	}
	if (SUCCEEDED(hr))
	{
		// Pixel aspect ratio
		hr = MFSetAttributeRatio(pMediaType, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
	}
	if (SUCCEEDED(hr))
	{
		hr = MFCreateFile(
			MF_ACCESSMODE_READWRITE,
			MF_OPENMODE_DELETE_IF_EXIST,
			MF_FILEFLAGS_NONE,
			L"output1.mp4",&pByteStream);
	}
	if (SUCCEEDED(hr))
	{
		hr = MFCreateMPEG4MediaSink(
			pByteStream,
			pMediaType,
			NULL,&pMediaSink);
	}

Then I create an IMFSinkWriter from this media sink using MFCreateSinkWriterFromMediaSink; this is my code:

if (SUCCEEDED(hr))
	{
		hr = MFCreateSinkWriterFromMediaSink(pMediaSink, NULL, &pSinkWriter);
	}
	// Tell the sink writer to start accepting data.
	if (SUCCEEDED(hr))
	{
		hr = pSinkWriter->BeginWriting();
	}

	if (SUCCEEDED(hr))
	{
		pSinkWriter->AddRef();
	}

And then I write every sample to the sink writer with IMFSinkWriter::WriteSample(0, IMFSample); this is my code:

IMFSample *pSample = NULL;
	IMFMediaBuffer *pBuffer = NULL;

	const DWORD cbBuffer = mSamplesQueue.GetNextSampleSize();
	UINT32 isIDR = mSamplesQueue.GetNextSampleIsIDR();
	BYTE *pData = NULL;

	// Create a new memory buffer.
	HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pBuffer);

	// Lock the buffer and copy the video frame to the buffer.
DWORD pcbMaxLen = 0, pcbCurLen = 0;
	if (SUCCEEDED(hr))
	{
		hr = pBuffer->Lock(&pData, &pcbMaxLen, &pcbCurLen);
	}
	if (SUCCEEDED(hr))
	{
		hr = mSamplesQueue.Dequeu(&pData);
	}
	if (pBuffer)
	{
		pBuffer->Unlock();
	}
	// Set the data length of the buffer.
	if (SUCCEEDED(hr))
	{
		hr = pBuffer->SetCurrentLength(cbBuffer);
	}
	// Create a media sample and add the buffer to the sample.
	if (SUCCEEDED(hr))
	{
		hr = MFCreateSample(&pSample);
	}
	if (SUCCEEDED(hr))
	{
		hr = pSample->AddBuffer(pBuffer);
	}
	// Set the time stamp and the duration.
	if (SUCCEEDED(hr))
	{
		hr = pSample->SetSampleTime(rtStart);
	}
	if (SUCCEEDED(hr))
	{
		hr = pSample->SetSampleDuration(rtDuration);
	}
	if (SUCCEEDED(hr))
	{
		hr = pSample->SetUINT32(MFSampleExtension_CleanPoint, isIDR);
	}
	// Send the sample to the Sink Writer.
	if (SUCCEEDED(hr))
	{
		hr = pSinkWriter->WriteSample(0, pSample);
	}
	SafeRelease(&pSample);
	SafeRelease(&pBuffer);

The writing of samples is an iterative code path that is called for every sample that I have (I am testing with 1k I and P samples). Now when I call IMFSinkWriter::Finalize(), it tells me "0xc00d4a45 : Sink could not create valid output file because required headers were not provided to the sink." It does create an MP4 file with a plausible size (for my 1k samples, 4.6 MB). This is the link to the trace from MFTrace.

If it is asking for MF_MT_MPEG_SEQUENCE_HEADER, I am already setting it with IMFMediaType::SetBlob(MF_MT_MPEG_SEQUENCE_HEADER, BYTE[], UINT32).

I checked the file with Elecard Video Format Analyzer and the header seems incomplete. 

Could I get some help finding out what I am missing or whether there is some better/other way of doing what I am trying to achieve?

Thanks!


Thanks, Manish






MFCopy: One more alarming bug in Windows 8/8.1


One more Windows 8/8.1  bug. This is alarming. Basic functionality broken.

Basically, MFCopy "trim" does not work in Windows 8/8.1. Works fine on Windows 7.

Steps to reproduce:

MFCopy  -s 20000 -d 60000 input.mp4 out.mp4

Please see the output files (out_win8.mp4, out_win7.mp4) here:

https://drive.google.com/file/d/0Bxyb9Iftjh4DX1FsVENJWWt4Q3M/view?usp=sharing

The duration of out_win8.mp4 is 1:19 (incorrect), whereas the duration of out_win7.mp4 is 1:00 (correct).

MS developers, Please check this.

Why is IAudioCaptureClient::GetBuffer returning packets with 448 frames on 44100 Hz sampling rate ?


Hi,

After a lot of hard work my app, which I have been working on for 4 years already, is finished now.

I am encoding 3 custom live sources at once with the sink writer (2 audio and one video) in my program, using the IAudioCaptureClient to collect the audio data.

I was able to synchronize my 2 audio sources with the video source (the video has an unknown incoming sample rate which can vary heavily since it comes from textures) to the exact nanosecond. Well, for every sample rate above 44100 it's easy, because the frame count in each packet coming from the buffer is related to the sampling rate, but with 44100 it's not, and I would like to know why. My buffers are each set to one second in size with IAudioClient::Initialize, so they can hold a full second of samples.

The following table shows you the numFramesToRead in each packet from GetBuffer for all the sample rates starting from 44100 :

Sample Rate    |    Frames Per Packet
     44100     |            448
     48000     |            480
     96000     |            960
    192000     |           1920

As you can see the number of frames per packet on 44100 is not related to the sample rate like with all the others.

For both audio sources I am looking for packets every 500 milliseconds with GetNextPacketSize and then drain the buffer with a while loop. As I stated, my 3 sources are live and the user can press start and stop to record. On start all sources start generating data at the exact same nanosecond, and on stop I drain the audio buffer until the audio stream length is equal to the video stream length.

On all sample rates above 44100 the audio duration always ends with 00000 in the last 5 digits. I'll give you an example.

After pressing stop the stream length for audio and video is something like this ( durations in 100 nanosecond units ) :

Audio stream length = 279600000

Video stream length = 284680000

Then I drain the audio buffer until it's

Audio stream length = 284680000

Video stream length = 284680000

This works for all sample rates except 44100. On 44.1 kHz it looks something like this when stop is pressed:

Audio stream length = 84317205

Video stream length = 87650000

Then I drain the buffer to the nearest possible value (in this example I rounded up; I could also round down so that the audio length is smaller), so it's

Audio stream length = 87650075

Video stream length = 87650000

I mean, you can't see those few nanoseconds with your eyes on screen, and when you watch the video it is in sync, but I am a perfectionist and want it to be even. It doesn't matter whether I use a time source with only milliseconds (like in the examples above) or a high-precision one with nanoseconds for my video frame timing. I tested it by using MFCreateSystemTimeSource and timed the video frames as they should be, but it also makes no difference.

Now the question is why we get 448 frames per packet on 44100 instead of 441. Is the actual sample rate 44800? When I press stop, the audio duration is always NOT divisible by the frame duration for that sample rate, and that means there are half/unfinished frames stored.

I might find it out in the next few days, but I would appreciate any help. Maybe someone can shed some light on this.
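
What I plan to check next (a sketch; pAudioClient is the IAudioClient I already use and nSamplesPerSec stands for the mix format's sample rate): whether the 448-frame packets simply match the audio engine's device period rather than a round 10 ms at 44100 Hz.

    // Query the engine period and convert it to frames at the current rate.
    REFERENCE_TIME hnsDefaultPeriod = 0;
    REFERENCE_TIME hnsMinimumPeriod = 0;
    HRESULT hr = pAudioClient->GetDevicePeriod(&hnsDefaultPeriod, &hnsMinimumPeriod);
    if (SUCCEEDED(hr))
    {
        // The period is in 100-nanosecond units; 10,000,000 of them per second.
        double framesPerPeriod =
            (double)nSamplesPerSec * (double)hnsDefaultPeriod / 10000000.0;

        printf("engine period: %.2f ms -> %.1f frames per packet\n",
               hnsDefaultPeriod / 10000.0, framesPerPeriod);
    }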

regards

co0Kie





Problem with synchronizing live video and audio on 44100 Hz sample rate due to packet size from IAudioCaptureClient::GetBuffer


Hi,

I am encoding 3 custom live sources at once with the sink writer, using the IAudioCaptureClient to collect the audio data.

I was able to synchronize my 2 audio sources with the video source to the exact nanosecond. Well, for every sample rate above 44100 it's easy, because the frame count in each packet coming from the buffer is related to the sample rate, but with 44100 it's not, and I would like to know why. My buffers are each set to one second in size with IAudioClient::Initialize, so they can hold a full second of samples.

The following table shows you the numFramesToRead in each packet from GetBuffer for all the sample rates starting from 44100 :

-

Sample Rate    |    Frames Per Packet
     44100     |            448
     48000     |            480
     96000     |            960
    192000     |           1920

-

As you can see the number of frames per packet on 44100 is not related to the sample rate like with all the others.

For both audio sources I am looking for packets every 500 milliseconds with GetNextPacketSize and then drain the buffer with a while loop. As I stated, my 3 sources are live and the user can press start and stop to record. On start all sources start generating data at the exact same nanosecond, and on stop I drain the audio buffer until the audio stream length is equal to the video stream length.

On all sample rates above 44100 the audio duration always ends with 00000 in the last 5 digits. I'll give you an example. After pressing stop, the stream lengths for audio and video are something like this (durations in 100-nanosecond units):

-

Audio = 279600000

Video = 284680000

Then I drain the audio buffer until it's

Audio = 284680000

Video = 284680000

This works for all sample rates except 44100. On 44.1 kHz it looks like this when stop is pressed:

-

Audio stream length = 84317205

Video stream length = 87650000

Then I drain the buffer to the nearest possible value (in this example I rounded up; I could also round down so that the audio length is smaller), so it's

Audio stream length = 87650075

Video stream length = 87650000

-

I mean, you can't see those few nanoseconds with your eyes on screen, and when you watch the video it is in sync, but I am a perfectionist and want it to be even. It doesn't matter whether I use a time source with only milliseconds (like in the examples above) or a high-precision one with nanoseconds for my video frame timing. I tested it by using MFCreateSystemTimeSource and timed the video frames as they should be, but it also makes no difference.

Now the question is why we get 448 frames per packet on 44100 Hz from IAudioCaptureClient::GetBuffer instead of 441. Is the actual sample rate 44800? When I press stop, the audio duration is always NOT divisible by the frame duration for that sample rate, and that means there are half/unfinished frames stored.

I might find it out in the next few days, but I would appreciate any help. Maybe someone can shed some light on this.

regards

co0Kie










How to issue KSPROPERTYSETID_ExtendedCameraControl ?


Hi,

Can KSPROPERTYSETID_ExtendedCameraControl be issued through IKsControl::KsProperty? Why does my code fail (hr == E_INVALIDARG)? The handler in the driver is actually not called, so it looks like the failure happens in user mode. Here spKsControl is a CComPtr<IKsControl> instance, and spKsControl->KsProperty works fine for PROPSETID_VIDCAP_CAMERACONTROL. 

Environment: Windows 8.1

Code built with Visual Studio 2013, platform toolset Visual Studio 2013 (v120) in the project settings.

	KSPROPERTY KsProperty = { KSPROPERTYSETID_ExtendedCameraControl, KSPROPERTY_CAMERACONTROL_EXTENDED_PHOTOMODE, KSPROPERTY_TYPE_SET  };
	BYTE buf[sizeof(KsProperty)+sizeof(KSCAMERA_EXTENDEDPROP_HEADER)+sizeof(KSCAMERA_EXTENDEDPROP_PHOTOMODE)] = { 0 };
	KSPROPERTY* pKsProperty = (KSPROPERTY*)buf;
	*pKsProperty = KsProperty;
	KSCAMERA_EXTENDEDPROP_HEADER *pKsCameraExtendedPropHeader = (KSCAMERA_EXTENDEDPROP_HEADER*)(pKsProperty+1);
	KSCAMERA_EXTENDEDPROP_PHOTOMODE *pKsCameraExtendedPropPhotomode = (KSCAMERA_EXTENDEDPROP_PHOTOMODE*)(pKsCameraExtendedPropHeader + 1);

	pKsCameraExtendedPropHeader->Version = 1;
	pKsCameraExtendedPropHeader->PinId = KSCAMERA_EXTENDEDPROP_FILTERSCOPE;
	pKsCameraExtendedPropHeader->Size = sizeof(KSCAMERA_EXTENDEDPROP_HEADER)+sizeof(KSCAMERA_EXTENDEDPROP_PHOTOMODE);
	pKsCameraExtendedPropHeader->Capability = KSCAMERA_EXTENDEDPROP_CAPS_ASYNCCONTROL;
	pKsCameraExtendedPropHeader->Flags = KSCAMERA_EXTENDEDPROP_PHOTOMODE_NORMAL;

	DWORD junk;
	hr = spKsControl->KsProperty(pKsProperty,
		sizeof(KSPROPERTY),
		pKsCameraExtendedPropHeader,
		pKsCameraExtendedPropHeader->Size, &junk);
	if (FAILED(hr))
	{
		return hr;
	}

Thanks,

How to actually use MF_SINK_WRITER_D3D_MANAGER ?


Hi,

I have a few questions regarding the sink writer.

Since the sink writer does not use a topology internally and can only have input and output for one MFT per stream, the question arises what MF_SINK_WRITER_D3D_MANAGER is actually for.

I know that in a topology you can decode from file or memory with a hardware decoder ( then maybe put a transform node with a hardware video processor in between ) and then connect to a hardware encoder with the upstream downstream model.

I have a Direct3D source and would need to implement it as a custom media source if I wanted to use it in a topology. I have already started implementing that, but in the meantime I am searching for side paths. At the moment I am successfully encoding Direct3D surfaces with the sink writer. Well, let's say I encode staging textures, because that's all the sink writer accepts when it comes to Direct3D. I assume the sink writer is locking and copying the data out of the texture when I pass it to WriteSample. I think the sink writer is able to take DEFAULT textures and encode them directly, but in order to do that you need to connect a hardware upstream from a decoder on the same device, or set the device manager.

The documentation says exactly the same about MF_READWRITE_D3D_OPTIONAL. So if you use a hardware encoder which gives you back a positive MF_SA_D3D_AWARE or MF_SA_D3D11_AWARE, then you can set the IDirect3DDeviceManager9 or IMFDXGIDeviceManager on the sink writer with MF_SINK_WRITER_D3D_MANAGER. The MFCreateSinkWriterFromURL function then saves the pointer internally, and later on AddStream the MFT_MESSAGE_SET_D3D_MANAGER message is sent to set the device manager on the transform.

-

In my case I am using the AMD H264 Encoder MFT, which reports D3D_AWARE for Direct3D 9 and 11 (D3D11_AWARE). So in theory it should be possible to feed the encoder with Direct3D surfaces, but... what would be the correct way using the sink writer? Everything succeeds in my current attempt, but when I start encoding the app freezes, and the only way out then is the Task Manager and shutdown (note it's not marked as inactive, it's just not continuing). My guess is that internally the encoder calls LockDevice on the device manager, then fails somewhere in the middle and never unlocks the device again, so the app freezes.

The MSDN documentation states that you would create a device either for Direct3D rendering purposes or for video processing (with flags etc.), but not both. My device is a rendering device and I have my backbuffer which I want to encode. At the moment my encoding works but is very inefficient, as it goes this way:

backbuffer -> staging texture -> sample/data -> GPU encoder -> sink writer / disk
   (GPU)          (CPU)             (CPU)          (GPU)           (CPU)

That's all unnecessary, because technically you could do:

backbuffer -> GPU encoder -> sink writer / disk
   (GPU)         (GPU)            (CPU)

-

I would like to know what the MF_SINK_WRITER_D3D_MANAGER attribute is actually for. Is it possible to connect an upstream MFT to the sink writer, and if so, how would that work, since WriteSample wouldn't work then? I hope someone can help me set up the sink writer correctly. If I find it out by myself, I will update this thread and post my answer.
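
For reference, this is roughly how I am trying to set the device manager on the sink writer at the moment (a simplified sketch; m_pD3D11Device and the output file name are placeholders, and I assume the device was created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT and multithread protection enabled):

    UINT resetToken = 0;
    IMFDXGIDeviceManager *pDeviceManager = NULL;
    IMFAttributes *pAttributes = NULL;
    IMFSinkWriter *pSinkWriter = NULL;

    // Create the DXGI device manager and associate it with my D3D11 device.
    HRESULT hr = MFCreateDXGIDeviceManager(&resetToken, &pDeviceManager);
    if (SUCCEEDED(hr))
    {
        hr = pDeviceManager->ResetDevice(m_pD3D11Device, resetToken);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCreateAttributes(&pAttributes, 2);
    }
    if (SUCCEEDED(hr))
    {
        // Hand the device manager to the sink writer and allow hardware MFTs.
        hr = pAttributes->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, pDeviceManager);
    }
    if (SUCCEEDED(hr))
    {
        hr = pAttributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCreateSinkWriterFromURL(L"output.mp4", NULL, pAttributes, &pSinkWriter);
    }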

Regards

co0Kie






Microsoft MPEG Video Decoder MFT


Hello

I'm trying to create the MPEG decoder, both from TopoEdit and from code.

In both cases I get an error C004F011.

My Operating system is Win 8.1

Any Ideas?

Thanks


Microsoft AAC Encoder to result WAVE_FORMAT_RAW_AAC1 payload

Video Capture on recent Windows 8.1 Tablets shows very dark video


Hello,

I switched from DirectShow to Media Foundation to capture video from webcams. It is a desktop application, and it works well with both DirectShow and Media Foundation on Windows 7 and Windows 8.1 desktop computers for a lot of different webcams.

Trying the same application on a Windows 8.1 Atom based tablet, the video is very dark and green.

I tried it on the following tablets (all of them show the above described behavior):

-Acer T100A (camera sensor MT9M114, Atom 3740)

-Dell Venue Pro 11 (camera sensor OV2722 front, IMX175 back - Atom 3770)

-HP Omni 10 5600 (camera sensor OV2722, IMX175 - Atom 3770)

I capture using IMFMediaSession, building a simple topology with a media source and the EVR.

  • TopoEdit shows the same strange behavior
  • MFTrace does not show any errors (at least I do not see any errors)
  • If an external USB camera is used on these tablets, the video is fine.
  • The SDK sample MFCaptureD3D works fine; it uses the source reader for capturing. I verified the media type of the source used there, and it is the same one I use in my application (same stream descriptor, same media type, verified with mftrace)
  • The "CaptureEngine" video capture sample from the SDK also works as expected; however, I need Windows 7 compatibility and would like to use the same source on both platforms
  • When using DirectShow, all the above-mentioned tablets show only a fraction of the sensor image when capturing with lower resolutions (e.g. 640x360), but the colors of the video are fine. I tried it with the Skype desktop app and GraphEdit, same behavior (only a fraction of the video is shown, colors are fine) - Skype for desktop apparently uses a DirectShow source filter.

Has anyone tried capturing the camera of an Atom z3700 series tablet with media foundation using the media session? If so, is special handling of the media source required on these tablets?

If required, I will post some code or mftrace logs.

Thanks a lot,

Karl


Bluemlinger





When is it safe to release IMFSample and data?


According to the tutorial here: http://msdn.microsoft.com/en-us/library/windows/desktop/ff819477(v=vs.85).aspx, it looks like we are supposed to release the sample and sample buffer right away:

    // Send the sample to the Sink Writer.
    if (SUCCEEDED(hr))
    {
        hr = pWriter->WriteSample(streamIndex, pSample);
    }

    SafeRelease(&pSample);
    SafeRelease(&pBuffer);
But sometimes I get heap corruption errors when doing this... if I omit the calls to SafeRelease on the sample and buffer, I leak memory, but I don't get heap corruption errors.  From what I understand, the IMFSinkWriter queues up the samples sent to it and writes them in its own good time... so it does make sense that heap corruption happens if I release the sample before the sink writer gets a chance to write it.  Is this what is happening?  If so, how should I clean up the memory responsibly?  Do I need to put an asynchronous callback on the sink writer and use that to monitor when samples are finished so I can delete the memory?  That seems cumbersome, and I've never seen that done in any Microsoft examples, so I'm sort of wondering what to do.

WavSink Access Violation


I am attempting to walk through the WavSink example source code and I am encountering an access violation (First-chance exception at 0x66AC5838 (mfplat.dll) in WriteWavFile.exe: 0xC0000005: Access violation reading location 0xFEEEFEF2) after the MESessionStart case is hit on line 180 (hr = pSession->GetEvent(0, &pEvent);) of main.cpp. Has anyone encountered this before? If so, how did you resolve the issue? I am attempting to convert a .mp3 file to a .wav file. Just to make sure that I have the latest version of the WavSink sample, can you point me to the link that contains the latest version of the source code? The MSDN Code Gallery link from the WavSink sample page is not working; luckily I grabbed a version of the code a couple of months ago.

 MSDN Code Gallery from the WavSink Sample Page Broken Link:

http://msdn.microsoft.com/en-us/library/windows/desktop/bb970469(v=vs.85).aspx

Speeding up Video reading when using SourceReader


Hi,

I am using the SourceReader in my application to decode video. My application does not render the decoded video to the monitor; it serves to read in the video data as a matrix which users can further manipulate. I am using the SourceReader in synchronous mode and I am able to process frames of an HD video at about 17 fps. This is slow, because Media Player is able to play this file comfortably at 30 fps.

I am looking at how to improve the performance. Looking at the documentation, there appear to be two possible ways I can do this:

1. Use the SourceReader in asynchronous mode

2. Make use of hardware acceleration (DXVA) using the MF_SOURCE_READER_DISABLE_DXVA and MF_SOURCE_READER_D3D_MANAGER attributes of the Source Reader

I am interested in Option 2. This appears to require a Direct3D device manager and a Direct3D device. Also, the documentation states that this is recommended when decoding and rendering to a screen.

As I do not want video to be rendered to the display, is there a Direct3D device which acts like a Null Renderer, or is there an option on a Direct3D device to disable rendering to the screen?
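
To illustrate Option 2, this is roughly what I have in mind (a sketch only, not verified; no window or swap chain is involved, and the D3D11 device would exist purely for the decoder):

    UINT resetToken = 0;
    ID3D11Device *pDevice = NULL;
    IMFDXGIDeviceManager *pManager = NULL;
    IMFAttributes *pAttributes = NULL;
    IMFSourceReader *pReader = NULL;

    // Create a D3D11 device with video support; no swap chain, nothing is drawn.
    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL,
        D3D11_CREATE_DEVICE_VIDEO_SUPPORT, NULL, 0, D3D11_SDK_VERSION,
        &pDevice, NULL, NULL);

    if (SUCCEEDED(hr))
    {
        hr = MFCreateDXGIDeviceManager(&resetToken, &pManager);
    }
    if (SUCCEEDED(hr))
    {
        hr = pManager->ResetDevice(pDevice, resetToken);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCreateAttributes(&pAttributes, 1);
    }
    if (SUCCEEDED(hr))
    {
        // Hand the device manager to the source reader to enable DXVA decoding.
        hr = pAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pManager);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCreateSourceReaderFromURL(L"video.mp4", pAttributes, &pReader);
    }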

Or is my only option for speeding up performance to operate the SourceReader in asynchronous mode?

Any inputs will be appreciated.

Regards,

Dinesh
